[pve-devel] applied: [pmg-devel] [PATCH pmg-gui 1/1] overwrite run_editor of base class
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] applied: [pmg-devel] [PATCH manager 1/1] overwrite the built-in 'run_editor' function
applied
[pve-devel] applied: [PATCH widget-toolkit 2/2] add a checkbox to edit windows for advanced options
applied
[pve-devel] applied: [PATCH widget-toolkit 1/2] add advanced options to the input panel
applied
Re: [pve-devel] [PATCH widget-toolkit 2/2] add a checkbox to edit windows for advanced options
Am 04/05/2018 um 04:03 PM schrieb Dominik Csapak:
> if the inputpanel has advanced options, show a checkbox to show/hide them
>
> Signed-off-by: Dominik Csapak
> ---
>  window/Edit.js | 39 +++
>  1 file changed, 39 insertions(+)
>
> diff --git a/window/Edit.js b/window/Edit.js
> index f72bee0..8d5aa19 100644
> --- a/window/Edit.js
> +++ b/window/Edit.js
> @@ -274,6 +274,23 @@ Ext.define('Proxmox.window.Edit', {
>  	var dirty = form.isDirty();
>  	submitBtn.setDisabled(!valid || !(dirty || me.isCreate));
>  	resetBtn.setDisabled(!dirty);
> +
> +	if (inputPanel && inputPanel.hasAdvanced) {
> +	    // we want to show the advanced options
> +	    // as soon as some of it is not valid
> +	    var advancedItems = me.down('#advancedContainer').query('field');
> +	    var valid = true;
> +	    advancedItems.forEach(function(field) {
> +		if (!field.isValid()) {
> +		    valid = false;
> +		}
> +	    });

Do you want to run isValid on all fields, as otherwise only the first invalid one will be shown? If so, include this info in the comment, e.g. something like:

    // tell the user why the panel is invalid, always show all invalid advanced items

Else use:

    valid = advancedItems.every(function(f) { return f.isValid(); });

> +
> +	    if (!valid) {
> +		inputPanel.setAdvancedVisible(true);
> +		me.down('#advancedcb').setValue(true);
> +	    }
> +	}
>      };
>      form.on('dirtychange', set_button_status);
> @@ -297,6 +314,28 @@ Ext.define('Proxmox.window.Edit', {
>  	me.buttons = [ submitBtn, resetBtn ];
>      }
> +
> +    if (inputPanel && inputPanel.hasAdvanced) {
> +	var sp = Ext.state.Manager.getProvider();
> +	var advchecked = sp.get('proxmox-advanced-cb');
> +	inputPanel.setAdvancedVisible(advchecked);
> +	me.buttons.unshift(
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		itemId: 'advancedcb',
> +		boxLabelAlign: 'before',
> +		boxLabel: gettext('Advanced'),
> +		stateId: 'proxmox-advanced-cb',
> +		value: advchecked,
> +		listeners: {
> +		    change: function(cb, val) {
> +			inputPanel.setAdvancedVisible(val);
> +			sp.set('proxmox-advanced-cb', val);
> +		    }
> +		}
> +	    }
> +	);
> +    }
> +
>      var onlineHelp = me.onlineHelp;
>      if (!onlineHelp && inputPanel && inputPanel.onlineHelp) {
>  	onlineHelp = inputPanel.onlineHelp;
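The trade-off in that review comment can be sketched outside ExtJS. The field objects below are hypothetical stand-ins: in ExtJS, isValid() also (re)renders the field's error marker as a side effect, so it matters whether validation visits every field or stops at the first failure.

```javascript
// Hypothetical stand-in for an ExtJS form field; `log` records each
// isValid() call (the side effect that marks a field invalid in the UI).
function makeField(valid, log) {
    return {
        isValid: function() {
            log.push(valid);   // side effect happens per call
            return valid;
        }
    };
}

var log1 = [];
var fields1 = [makeField(true, log1), makeField(false, log1), makeField(false, log1)];

// forEach variant: visits all fields, so every invalid one gets marked
var allValid = true;
fields1.forEach(function(f) {
    if (!f.isValid()) { allValid = false; }
});
console.log(allValid, log1.length);   // false 3

var log2 = [];
var fields2 = [makeField(true, log2), makeField(false, log2), makeField(false, log2)];

// every() variant: short-circuits at the first invalid field
var allValid2 = fields2.every(function(f) { return f.isValid(); });
console.log(allValid2, log2.length);  // false 2
```

Both variants compute the same boolean, but only the forEach version runs the per-field side effect on all invalid fields — which is exactly the distinction the reviewer asks to document.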
[pve-devel] [RFC qemu-server] API/create: move locking inside worker
Move the locking inside the worker, so that the process doing the actual work (create or restore) holds the lock and can call functions which do locking without deadlocking.

This mirrors the behaviour we use for containers, and allows adding an 'autostart' parameter which starts the VM after successful creation. vm_start needs the lock, and as not the worker but its parent held it, it couldn't know that it was actually safe to continue...

Signed-off-by: Thomas Lamprecht
---

I discussed this with Fabian a few months ago and have something in mind that this shouldn't be that easy, but I cannot remember what exactly that reason was, so RFC. :)

 PVE/API2/Qemu.pm | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0f27d29..8695e94 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -560,7 +560,7 @@ __PACKAGE__->register_method({
 	# ensure no old replication state are exists
 	PVE::ReplicationState::delete_guest_states($vmid);

-	return $rpcenv->fork_worker('qmrestore', $vmid, $authuser, $realcmd);
+	return PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
     };

     my $createfn = sub {
@@ -607,10 +607,13 @@ __PACKAGE__->register_method({
 	PVE::AccessControl::add_vm_to_pool($vmid, $pool) if $pool;
     };

-	return $rpcenv->fork_worker('qmcreate', $vmid, $authuser, $realcmd);
+	return PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
     };

-    return PVE::QemuConfig->lock_config_full($vmid, 1, $archive ? $restorefn : $createfn);
+    my $worker_name = $archive ? 'qmrestore' : 'qmcreate';
+    my $code = $archive ? $restorefn : $createfn;
+
+    return $rpcenv->fork_worker($worker_name, $vmid, $authuser, $code);
 }});

 __PACKAGE__->register_method({
--
2.14.2
[pve-devel] [PATCH manager] merge cores, cpulimit and cpuunits for containers
like we do for qemu vms

Signed-off-by: Dominik Csapak
---
 www/manager6/lxc/Resources.js | 41 +++--
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/www/manager6/lxc/Resources.js b/www/manager6/lxc/Resources.js
index c4ae2db8..676ad340 100644
--- a/www/manager6/lxc/Resources.js
+++ b/www/manager6/lxc/Resources.js
@@ -59,32 +59,37 @@ Ext.define('PVE.lxc.RessourceView', {
 	    defaultValue: '',
 	    tdCls: 'pve-itype-icon-processor',
 	    renderer: function(value) {
-		if (value) { return value; }
-		return gettext('unlimited');
-	    }
-	},
-	cpulimit: {
-	    header: gettext('CPU limit'),
-	    editor: caps.vms['VM.Config.CPU'] ? 'PVE.lxc.CPUEdit' : undefined,
-	    defaultValue: 0,
-	    tdCls: 'pve-itype-icon-processor',
-	    renderer: function(value) {
-		if (value > 0) { return value; }
-		return gettext('unlimited');
+		var cpulimit = me.getObjectValue('cpulimit');
+		var cpuunits = me.getObjectValue('cpuunits');
+		var res;
+		if (value) {
+		    res = value;
+		} else {
+		    res = gettext('unlimited');
+		}
+
+		if (cpulimit) {
+		    res += ' [cpulimit=' + cpulimit + ']';
+		}
+
+		if (cpuunits) {
+		    res += ' [cpuunits=' + cpuunits + ']';
+		}
+		return res;
 	    }
 	},
-	cpuunits: {
-	    header: gettext('CPU units'),
-	    editor: caps.vms['VM.Config.CPU'] ? 'PVE.lxc.CPUEdit' : undefined,
-	    defaultValue: 1024,
-	    tdCls: 'pve-itype-icon-processor'
-	},
 	rootfs: {
 	    header: gettext('Root Disk'),
 	    defaultValue: Proxmox.Utils.noneText,
 	    editor: mpeditor,
 	    tdCls: 'pve-itype-icon-storage'
 	},
+	cpulimit: {
+	    visible: false
+	},
+	cpuunits: {
+	    visible: false
+	},
 	unprivileged: {
 	    visible: false
 	}
--
2.11.0
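The merged renderer boils down to something like this plain-JavaScript sketch (renderCores and its parameter list are made up for illustration; the real code pulls cpulimit/cpuunits via me.getObjectValue and wraps 'unlimited' in gettext):

```javascript
// Format the cores value, appending the hidden cpulimit/cpuunits rows
// in brackets, the way the merged column renders them.
function renderCores(value, cpulimit, cpuunits) {
    var res = value ? value : 'unlimited';   // falsy cores => unlimited
    if (cpulimit) {
        res += ' [cpulimit=' + cpulimit + ']';
    }
    if (cpuunits) {
        res += ' [cpuunits=' + cpuunits + ']';
    }
    return res;
}

console.log(renderCores(2, 1.5, 512));  // "2 [cpulimit=1.5] [cpuunits=512]"
console.log(renderCores(0, 0, 0));      // "unlimited"
```

This keeps cpulimit and cpuunits editable through the same CPUEdit dialog while collapsing three grid rows into one.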
[pve-devel] [PATCH manager 1/1] overwrite the built-in 'run_editor' function
this is needed for the 'enter' handler

Signed-off-by: Dominik Csapak
---
 www/manager6/lxc/DNS.js           | 1 +
 www/manager6/qemu/HardwareView.js | 1 +
 2 files changed, 2 insertions(+)

diff --git a/www/manager6/lxc/DNS.js b/www/manager6/lxc/DNS.js
index 3287ae91..4eab6b11 100644
--- a/www/manager6/lxc/DNS.js
+++ b/www/manager6/lxc/DNS.js
@@ -229,6 +229,7 @@ Ext.define('PVE.lxc.DNS', {
 	    url: "/api2/json/nodes/" + nodename + "/lxc/" + vmid + "/config",
 	    selModel: sm,
 	    cwidth1: 150,
+	    run_editor: run_editor,
 	    tbar: [ edit_btn ],
 	    rows: rows,
 	    listeners: {
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index dcdea14c..8017f84a 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -496,6 +496,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    url: '/api2/json/' + 'nodes/' + nodename + '/qemu/' + vmid + '/pending',
 	    interval: 5000,
 	    selModel: sm,
+	    run_editor: run_editor,
 	    tbar: [
 		{
 		    text: gettext('Add'),
--
2.11.0
[pve-devel] [PATCH widget-toolkit/manager/pmg-gui] allow pressing enter in ObjectGrids
this series makes it possible to press enter to open the editor of (pending)ObjectGrids

proxmox-widget-toolkit:

Dominik Csapak (1):
  allow pressing enter in ObjectGrids to edit a field

 grid/ObjectGrid.js | 8
 node/DNSView.js    | 1 +
 node/TimeView.js   | 1 +
 3 files changed, 10 insertions(+)

pve-manager:

Dominik Csapak (1):
  overwrite the built-in 'run_editor' function

 www/manager6/lxc/DNS.js           | 1 +
 www/manager6/qemu/HardwareView.js | 1 +
 2 files changed, 2 insertions(+)

pmg-gui:

Dominik Csapak (1):
  overwrite run_editor of base class

 js/ActionList.js   | 1 +
 js/MyNetworks.js   | 1 +
 js/ObjectGroup.js  | 1 +
 js/RelayDomains.js | 1 +
 js/Transport.js    | 1 +
 5 files changed, 5 insertions(+)

--
2.11.0
[pve-devel] [PATCH widget-toolkit 1/1] allow pressing enter in ObjectGrids to edit a field
for this we need to overwrite the 'run_editor' function of the ObjectGrid if we use a custom one

Signed-off-by: Dominik Csapak
---
 grid/ObjectGrid.js | 8
 node/DNSView.js    | 1 +
 node/TimeView.js   | 1 +
 3 files changed, 10 insertions(+)

diff --git a/grid/ObjectGrid.js b/grid/ObjectGrid.js
index bd294c8..68937ce 100644
--- a/grid/ObjectGrid.js
+++ b/grid/ObjectGrid.js
@@ -224,6 +224,14 @@ Ext.define('Proxmox.grid.ObjectGrid', {
 	return value;
     },

+    listeners: {
+	itemkeyup: function(view, record, item, index, e) {
+	    if (e.getKey() === e.ENTER) {
+		this.run_editor();
+	    }
+	}
+    },
+
     initComponent : function() {
 	var me = this;

diff --git a/node/DNSView.js b/node/DNSView.js
index 2df2dac..b0f0973 100644
--- a/node/DNSView.js
+++ b/node/DNSView.js
@@ -20,6 +20,7 @@ Ext.define('Proxmox.node.DNSView', {
 	    url: "/api2/json/nodes/" + me.nodename + "/dns",
 	    cwidth1: 130,
 	    interval: 1000,
+	    run_editor: run_editor,
 	    rows: {
 		search: { header: 'Search domain', required: true },
 		dns1: { header: gettext('DNS server') + " 1", required: true },
diff --git a/node/TimeView.js b/node/TimeView.js
index 0cf68eb..27de02d 100644
--- a/node/TimeView.js
+++ b/node/TimeView.js
@@ -26,6 +26,7 @@ Ext.define('Proxmox.node.TimeView', {
 	    url: "/api2/json/nodes/" + me.nodename + "/time",
 	    cwidth1: 150,
 	    interval: 1000,
+	    run_editor: run_editor,
 	    rows: {
 		timezone: {
 		    header: gettext('Time zone'),
--
2.11.0
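The pattern this patch relies on — a base-class key handler always dispatching through this.run_editor, so that a run_editor supplied in the instance config shadows the base one — can be sketched without ExtJS (ObjectGrid and onEnterKey below are simplified, made-up stand-ins for Ext.define and the itemkeyup listener):

```javascript
// Minimal stand-in for an ExtJS-style component: config properties are
// copied onto the instance (like Ext.apply), shadowing prototype methods.
function ObjectGrid(config) {
    for (var k in config) {
        this[k] = config[k];
    }
}
ObjectGrid.prototype.run_editor = function() { return 'base editor'; };

// The Enter-key listener from the base class calls this.run_editor(),
// so whichever run_editor sits on the instance wins.
ObjectGrid.prototype.onEnterKey = function() {
    return this.run_editor();
};

var plain = new ObjectGrid({});
var custom = new ObjectGrid({
    run_editor: function() { return 'custom editor'; }
});

console.log(plain.onEnterKey());   // "base editor"
console.log(custom.onEnterKey());  // "custom editor"
```

This is why the follow-up manager/pmg-gui patches only need to add a `run_editor: run_editor` line to each view's config: the shared listener in ObjectGrid then reaches the custom editor automatically.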
[pve-devel] [PATCH pmg-gui 1/1] overwrite run_editor of base class
so that every call lands in the custom run_editor

Signed-off-by: Dominik Csapak
---
 js/ActionList.js   | 1 +
 js/MyNetworks.js   | 1 +
 js/ObjectGroup.js  | 1 +
 js/RelayDomains.js | 1 +
 js/Transport.js    | 1 +
 5 files changed, 5 insertions(+)

diff --git a/js/ActionList.js b/js/ActionList.js
index 213afb2..af5e2e3 100644
--- a/js/ActionList.js
+++ b/js/ActionList.js
@@ -121,6 +121,7 @@ Ext.define('PMG.ActionList', {
 	}

 	Ext.apply(me, {
+	    run_editor: run_editor,
 	    columns: [
 		{
 		    header: gettext('Name'),
diff --git a/js/MyNetworks.js b/js/MyNetworks.js
index 955b3fa..6ef0022 100644
--- a/js/MyNetworks.js
+++ b/js/MyNetworks.js
@@ -114,6 +114,7 @@ Ext.define('PMG.MyNetworks', {
 	Ext.apply(me, {
 	    store: store,
 	    tbar: tbar,
+	    run_editor: run_editor,
 	    viewConfig: {
 		trackOver: false
 	    },
diff --git a/js/ObjectGroup.js b/js/ObjectGroup.js
index 1e76ef3..0de4da3 100644
--- a/js/ObjectGroup.js
+++ b/js/ObjectGroup.js
@@ -238,6 +238,7 @@ Ext.define('PMG.ObjectGroup', {
 	Proxmox.Utils.monStoreErrors(me, me.store, true);

 	Ext.apply(me, {
+	    run_editor: run_editor,
 	    listeners: {
 		itemdblclick: run_editor,
 		activate: reload
diff --git a/js/RelayDomains.js b/js/RelayDomains.js
index 8945b2d..26560c1 100644
--- a/js/RelayDomains.js
+++ b/js/RelayDomains.js
@@ -114,6 +114,7 @@ Ext.define('PMG.RelayDomains', {
 	Ext.apply(me, {
 	    store: store,
 	    tbar: tbar,
+	    run_editor: run_editor,
 	    viewConfig: {
 		trackOver: false
 	    },
diff --git a/js/Transport.js b/js/Transport.js
index b5e20d8..f125835 100644
--- a/js/Transport.js
+++ b/js/Transport.js
@@ -133,6 +133,7 @@ Ext.define('PMG.Transport', {
 	Ext.apply(me, {
 	    store: store,
 	    tbar: tbar,
+	    run_editor: run_editor,
 	    viewConfig: {
 		trackOver: false
 	    },
--
2.11.0
[pve-devel] applied: [PATCH widget-toolkit] better default focus selection
applied
Re: [pve-devel] [RFC qemu-server] API/create: move locking inside worker
On Fri, Apr 06, 2018 at 11:54:03AM +0200, Thomas Lamprecht wrote:
> Move the locking inside worker, so that the process doing the actual
> work (create or restore) holds the lock, and can call functions which
> do locking without deadlocking.
>
> This mirrors the behaviour we use for containers, and allows to add
> an 'autostart' parameter which starts the VM after successful
> creation. vm_start needs the lock and as not the worker but it's
> parents held it, it couldn't know that it was actually save to
> continue...
>
> Signed-off-by: Thomas Lamprecht
> ---
>
> I discussed this with Fabian a few months ago and have something in
> mind that this shouldn't be that easy, but I cannot remember what
> exactly that reason was, so RFC. :)

there is one issue - if somebody holds the flock and you only realize it after you have forked, you did a fork for nothing (and instead of a rather fast "timeout" error message, you have to check the task log). this is not nice from a usability perspective, although it should not cause problems from a technical/lockdep one ;)

the clean solution is

flock {
    set_config_lock
}
fork {
    do stuff
    flock {
        (re)read config
        check_config_lock
        modify/write config
    }
    do some more stuff
    flock {
        (re)read config
        check_config_lock
        remove_config_lock
        final modify/write config
    }
}

of course, this is much more involved and harder to get all corner cases right ;)

I would like to move all the create/restore/clone API paths to this flow in the long run, but I am not opposed to switching the fork/lock order in places where we need the flock sooner/now.

>
>  PVE/API2/Qemu.pm | 9 ++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 0f27d29..8695e94 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -560,7 +560,7 @@ __PACKAGE__->register_method({
>  	# ensure no old replication state are exists
>  	PVE::ReplicationState::delete_guest_states($vmid);
>
> -	return $rpcenv->fork_worker('qmrestore', $vmid, $authuser, $realcmd);
> +	return PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
>  };
>
>  my $createfn = sub {
> @@ -607,10 +607,13 @@ __PACKAGE__->register_method({
>  	PVE::AccessControl::add_vm_to_pool($vmid, $pool) if $pool;
>  };
>
> -	return $rpcenv->fork_worker('qmcreate', $vmid, $authuser, $realcmd);
> +	return PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
>  };
>
> -	return PVE::QemuConfig->lock_config_full($vmid, 1, $archive ? $restorefn : $createfn);
> +	my $worker_name = $archive ? 'qmrestore' : 'qmcreate';
> +	my $code = $archive ? $restorefn : $createfn;
> +
> +	return $rpcenv->fork_worker($worker_name, $vmid, $authuser, $code);
> }});
>
> __PACKAGE__->register_method({
> --
> 2.14.2
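The flow Fabian sketches can be modelled in a few lines of plain JavaScript to show why it helps — the real implementation is Perl in qemu-server, and flock, createVM and the config object here are made-up stand-ins, not the actual API. Setting a persistent config lock under a short flock *before* forking lets a second caller fail fast, instead of discovering the conflict inside its worker task:

```javascript
// Toy model: `config` stands in for the on-disk VM config, `flocked`
// for the short-lived file lock around config reads/writes.
var config = {};
var flocked = false;

function flock(fn) {
    if (flocked) { throw new Error('flock timeout'); }
    flocked = true;
    try { return fn(); } finally { flocked = false; }
}

function createVM(work) {
    // short flock: check and set the persistent config lock, then release
    flock(function() {
        if (config.lock) { throw new Error('VM is locked (' + config.lock + ')'); }
        config.lock = 'create';      // survives the fork, unlike the flock
    });
    // "fork": the long-running work runs without holding the flock
    work();
    // short flock again: final write removes the config lock
    flock(function() {
        delete config.lock;
    });
}

var order = [];
createVM(function() { order.push('restore data'); });
console.log(order.join(','), config.lock);   // "restore data" undefined

config.lock = 'backup';                      // simulate a concurrent holder
var err;
try { createVM(function() { order.push('should not run'); }); }
catch (e) { err = e.message; }
console.log(err);                            // "VM is locked (backup)"
```

The second caller is rejected synchronously, before any worker would be forked — which is exactly the usability point about fast "VM is locked" errors versus digging through the task log.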
[pve-devel] applied: [PATCH manager] merge cores, cpulimit and cpuunits for containers
applied
[pve-devel] applied: [PATCH widget-toolkit 1/1] allow pressing enter in ObjectGrids to edit a field
applied
[pve-devel] [PATCH manager] better focus of lxc/MPEdit
this gives the focus to either the 'path' field or to the 'X' tool (if it is a rootfs)

Signed-off-by: Dominik Csapak
---
 www/manager6/lxc/MPEdit.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/lxc/MPEdit.js b/www/manager6/lxc/MPEdit.js
index 37f725ea..cf8d6f02 100644
--- a/www/manager6/lxc/MPEdit.js
+++ b/www/manager6/lxc/MPEdit.js
@@ -324,6 +324,7 @@ Ext.define('PVE.lxc.MountPointEdit', {
 	Ext.apply(me, {
 	    subject: subject,
+	    defaultFocus: me.confid !== 'rootfs' ? 'textfield[name=mp]' : 'tool',
 	    items: ipanel
 	});
--
2.11.0
[pve-devel] [PATCH widget-toolkit] better default focus selection
we do not want to focus on hidden/disabled fields, because that focus gets lost and prevents some things, e.g. cancelling with ESC

Signed-off-by: Dominik Csapak
---
 window/Edit.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/window/Edit.js b/window/Edit.js
index 8d5aa19..9548e21 100644
--- a/window/Edit.js
+++ b/window/Edit.js
@@ -34,7 +34,7 @@ Ext.define('Proxmox.window.Edit', {
     defaultButton: 'submitbutton',

     // finds the first form field
-    defaultFocus: 'field',
+    defaultFocus: 'field[disabled=false][hidden=false]',

     showProgress: false,
--
2.11.0
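What the new selector buys can be sketched as a plain-JS stand-in for ExtJS's component query (firstFocusable and the field objects below are made up for illustration; ExtJS evaluates `field[disabled=false][hidden=false]` internally):

```javascript
// Return the first field that is neither disabled nor hidden --
// the element that should receive default focus in an edit window.
function firstFocusable(fields) {
    for (var i = 0; i < fields.length; i++) {
        if (!fields[i].disabled && !fields[i].hidden) {
            return fields[i];
        }
    }
    return null;   // nothing focusable
}

var fields = [
    { name: 'id',   hidden: true,  disabled: false },  // old 'field' query hit this
    { name: 'size', hidden: false, disabled: true },
    { name: 'name', hidden: false, disabled: false }
];
console.log(firstFocusable(fields).name);  // "name"
```

With the old plain `field` selector the hidden 'id' field would win, its focus would be lost, and window-level key handling (like ESC to cancel) would break.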
Re: [pve-devel] [RFC qemu-server] API/create: move locking inside worker
Am 04/06/2018 um 12:28 PM schrieb Fabian Grünbichler:
> On Fri, Apr 06, 2018 at 11:54:03AM +0200, Thomas Lamprecht wrote:
>> Move the locking inside worker, so that the process doing the actual
>> work (create or restore) holds the lock, and can call functions which
>> do locking without deadlocking.
>>
>> This mirrors the behaviour we use for containers, and allows to add
>> an 'autostart' parameter which starts the VM after successful
>> creation. vm_start needs the lock and as not the worker but it's
>> parents held it, it couldn't know that it was actually save to
>> continue...
>>
>> Signed-off-by: Thomas Lamprecht
>> ---
>>
>> I discussed this with Fabian a few months ago and have something in
>> mind that this shouldn't be that easy, but I cannot remember what
>> exactly that reason was, so RFC. :)
>
> there is one issue - if somebody holds the flock and you only realize
> it after you have forked, you did a fork for nothing (and instead of a
> rather fast "timeout" error message, you have to check the task log).
> this is not nice from a usability perspective, although it should not
> cause problems from a technical/lockdep one ;)

Ah, yeah, I could add a simple check if the VM is already locked before forking the worker. I mean, that's obviously racy, but should catch most cases, and we do that elsewhere too, AFAIK. I'll try to add it and just send it out; if it's acceptable to you then we could apply it as a temporary improvement :)

> the clean solution is
>
> flock {
>     set_config_lock
> }
> fork {
>     do stuff
>     flock {
>         (re)read config
>         check_config_lock
>         modify/write config
>     }
>     do some more stuff
>     flock {
>         (re)read config
>         check_config_lock
>         remove_config_lock
>         final modify/write config
>     }
> }
>
> of course, this is much more involved and harder to get all corner
> cases right ;)
>
> I would like to move all the create/restore/clone API paths to this
> flow in the long run, but I am not opposed to switching the fork/lock
> order in places where we need the flock sooner/now.

Ah yes, I remember now, thanks!

I'll try to give that a look in one and a half weeks, after my vacation.

cheers,
Thomas
Re: [pve-devel] [RFC qemu-server] API/create: move locking inside worker
Am 04/06/2018 um 01:24 PM schrieb Thomas Lamprecht:
> Am 04/06/2018 um 12:28 PM schrieb Fabian Grünbichler:
>> there is one issue - if somebody holds the flock and you only realize
>> it after you have forked, you did a fork for nothing (and instead of a
>> rather fast "timeout" error message, you have to check the task log).
>
> Ah, yeah, I could add a simple check if the VM is already locked before
> forking the worker, I mean that's obviously racy but should catch most
> cases, and we do that elsewhere too, AFAIK. I'll try to add it and just
> send it out, if it's acceptable to you then we could apply it as a
> temporary improvement :)

OK, scratch it: without calling flock myself (here or in a new method) I cannot do a timeout-less 'check_lock', AFAIS, so better to just do the right thing (after vacation) ;)
[pve-devel] applied: [PATCH manager] better focus of lxc/MPEdit
applied
[pve-devel] [PATCH manager] fix editor and set_button_status for cloudinit
cloudinit images do not have an editor, so return here; and value has to be a string for the match to work.

Also set the remove button text correctly when selecting a cloudinit disk.

Signed-off-by: Dominik Csapak
---
 www/manager6/qemu/HardwareView.js | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index e726cb70..b597e172 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -248,7 +248,9 @@ Ext.define('PVE.qemu.HardwareView', {
 	var editor = rowdef.editor;
 	if (rowdef.tdCls == 'pve-itype-icon-storage') {
 	    var value = me.getObjectValue(rec.data.key, '', true);
-	    if (value.match(/media=cdrom/)) {
+	    if (value.match(/vm-.*-cloudinit/)) {
+		return;
+	    } else if (value.match(/media=cdrom/)) {
 		editor = 'PVE.qemu.CDEdit';
 	    }
 	}
@@ -518,12 +520,12 @@ Ext.define('PVE.qemu.HardwareView', {
 	    rowdef.tdCls == 'pve-itype-icon-storage' &&
 	    (value && !value.match(/media=cdrom/));

-	var isCloudInit = (value && value.match(/vm-.*-cloudinit/));
+	var isCloudInit = (value && value.toString().match(/vm-.*-cloudinit/));
 	var isEfi = (key === 'efidisk0');

 	remove_btn.setDisabled(rec.data['delete'] || (rowdef.never_delete === true));
-	remove_btn.setText(isUsedDisk ? remove_btn.altText : remove_btn.defaultText);
+	remove_btn.setText((isUsedDisk && !isCloudInit) ? remove_btn.altText : remove_btn.defaultText);

 	edit_btn.setDisabled(rec.data['delete'] || !rowdef.editor || isCloudInit);
--
2.11.0
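The two regex checks can be exercised in isolation (classifyDrive is a made-up helper for illustration; the real code branches inline in HardwareView.js, and the example volume IDs are invented):

```javascript
// Classify a drive config value the way the patched checks do: cloudinit
// first (so its editor is skipped even though it is attached as a cdrom),
// then cdrom, else a regular disk. toString() guards against non-string
// values, the bug the second hunk fixes.
function classifyDrive(value) {
    var s = value ? value.toString() : '';
    if (s.match(/vm-.*-cloudinit/)) { return 'cloudinit'; }
    if (s.match(/media=cdrom/))     { return 'cdrom'; }
    return 'disk';
}

console.log(classifyDrive('local-lvm:vm-100-cloudinit,media=cdrom'));
// "cloudinit" -- matched before the cdrom check, so no editor opens
console.log(classifyDrive('local:iso/debian.iso,media=cdrom'));  // "cdrom"
console.log(classifyDrive('local-lvm:vm-100-disk-1,size=32G'));  // "disk"
```

Ordering matters here: a cloudinit volume also carries `media=cdrom`, so testing the cdrom pattern first would wrongly open the CD editor for it.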
[pve-devel] applied: [PATCH docs] Add documentation for CIFS Storage Plugin.
thanks, applied, with cleanup followup

Am 04/05/2018 um 02:08 PM schrieb Wolfgang Link:
---
 pve-intro.adoc        |  3 +-
 pve-storage-cifs.adoc | 99 +++
 pvesm.adoc            |  5 +++
 qm.adoc               |  2 +-
 vzdump.adoc           |  2 +-
 5 files changed, 108 insertions(+), 3 deletions(-)
 create mode 100644 pve-storage-cifs.adoc

diff --git a/pve-intro.adoc b/pve-intro.adoc
index 1188e77..f0b0d1e 100644
--- a/pve-intro.adoc
+++ b/pve-intro.adoc
@@ -106,6 +106,7 @@ We currently support the following Network storage types:
 * LVM Group (network backing with iSCSI targets)
 * iSCSI target
 * NFS Share
+* CIFS Share
 * Ceph RBD
 * Directly use iSCSI LUNs
 * GlusterFS
@@ -125,7 +126,7 @@ running Containers and KVM guests. It basically creates an archive of
 the VM or CT data which includes the VM/CT configuration files.
 KVM live backup works for all storage types including VM images on
-NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
+NFS, CIFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
 optimized for storing VM backups fast and effective (sparse files, out
 of order data, minimized I/O).

diff --git a/pve-storage-cifs.adoc b/pve-storage-cifs.adoc
new file mode 100644
index 000..696d809
--- /dev/null
+++ b/pve-storage-cifs.adoc
@@ -0,0 +1,99 @@
+CIFS Backend
+---
+ifdef::wiki[]
+:pve-toplevel:
+:title: Storage: CIFS
+endif::wiki[]
+
+Storage pool type: `cifs`
+
+The CIFS backend is based on the directory backend, so it shares most
+properties. The directory layout and the file naming conventions are
+the same. The main advantage is that you can directly configure the
+CIFS server, so the backend can mount the share automatically in
+the hole cluster. There is no need to modify `/etc/fstab`. The backend
+can also test if the server is online, and provides a method to query
+the server for exported shares.
+
+Configuration
+~
+
+The backend supports all common storage properties, except the shared
+flag, which is always set. Additionally, the following properties are
+used to configure the CIFS server:
+
+server::
+
+Server IP or DNS name. To avoid DNS lookup delays, it is usually
+preferable to use an IP address instead of a DNS name - unless you
+have a very reliable DNS server, or list the server in the local
+`/etc/hosts` file.
+
+share::
+
+CIFS share (as listed by `pvesm cifsscan`).
+
+Optional properties:
+
+username::
+
+If not presents, "guest" is used.
+
+password::
+
+The user password.
+It will be saved in a private directory (/etc/pve/priv/.cred).
+
+domain::
+
+sets the domain (workgroup) of the user
+
+smbversion::
+
+SMB protocol Version (default is `3`).
+SMB1 is not supported due to security issues.
+
+path::
+
+The local mount point (defaults to `/mnt/pve//`).
+
+.Configuration Example (`/etc/pve/storage.cfg`)
+
+cifs: backup
+	path /mnt/pve/backup
+	server 10.0.0.11
+	share VMData
+	content backup
+	username anna
+	smbversion 3
+
+
+Storage Features
+
+
+CIFS does not support snapshots, but the backend uses `qcow2` features
+to implement snapshots and cloning.
+
+.Storage features for backend `nfs`
+[width="100%",cols="m,m,3*d",options="header"]
+|==
+|Content types |Image formats |Shared |Snapshots |Clones
+|images rootdir vztmpl iso backup |raw qcow2 vmdk subvol |yes |qcow2 |qcow2
+|==
+
+Examples
+
+
+You can get a list of exported CIFS shares with:
+
+ # pvesm cifsscan [--username ] [--password]
+
+ifdef::wiki[]
+
+See Also
+
+
+* link:/wiki/Storage[Storage]
+
+endif::wiki[]
diff --git a/pvesm.adoc b/pvesm.adoc
index 62d190e..1d55d59 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -71,6 +71,7 @@ snapshots and clones.
 |ZFS (local) |zfspool   |file  |no    |yes   |yes
 |Directory   |dir       |file  |no    |no^1^ |yes
 |NFS         |nfs       |file  |yes   |no^1^ |yes
+|CIFS        |cifs      |file  |yes   |no^1^ |yes
 |GlusterFS   |glusterfs |file  |yes   |no^1^ |yes
 |LVM         |lvm       |block |no^2^ |no    |yes
 |LVM-thin    |lvmthin   |block |no    |yes   |yes
@@ -370,6 +371,8 @@ See Also
 * link:/wiki/Storage:_NFS[Storage: NFS]

+* link:/wiki/Storage:_CIFS[Storage: CIFS]
+
 * link:/wiki/Storage:_RBD[Storage: RBD]

 * link:/wiki/Storage:_ZFS[Storage: ZFS]
@@ -386,6 +389,8 @@ include::pve-storage-dir.adoc[]

 include::pve-storage-nfs.adoc[]

+include::pve-storage-cifs.adoc[]
+
 include::pve-storage-glusterfs.adoc[]

 include::pve-storage-zfspool.adoc[]
diff --git a/qm.adoc b/qm.adoc
index 154c5c1..5fba463 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -163,7 +163,7 @@ On each controller you attach a number
[pve-devel] applied: [PATCH manager 01/11] add advanced checkbox to the wizard
applied all (11) patches
[pve-devel] applied: [PATCH manager] fix editor and set_button_status for cloudinit
applied
[pve-devel] applied: [PATCH docs 1/2] fix wording for new memory dialog for qemu
applied
[pve-devel] applied: [PATCH docs 2/2] mention that NAT mode is not available on the WebUI
applied