[pve-devel] applied: [PATCH pve-container] Do not skip unprivileged config parameter when restoring a container
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH container 2/2] add restart migration to lxc api
> @@ -880,10 +891,13 @@ __PACKAGE__->register_method({
> 	# test if VM exists
> 	PVE::LXC::Config->load_config($vmid);
>
> +	my $isrunning = PVE::LXC::check_running($vmid);
> 	# try to detect errors early
> -	if (PVE::LXC::check_running($vmid)) {
> -	    die "can't migrate running container without --online\n"
> -		if !$param->{online};
> +	if ($isrunning) {
> +	    die "lxc live migration not implemented\n"
> +		if $param->{online};
> +	    die "running container needs restart mode for migration\n"
> +		if !$param->{restart};

Maybe it is worth to factor out this check into a separate function,
because you use the same check in a previous patch.
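A factored-out helper along the lines of this suggestion could look roughly like the sketch below. This is only an illustration, not part of the posted patches; the helper name and its placement are assumptions.

```perl
# hypothetical shared helper for the API call and the migration code;
# name and location are made up for illustration only
sub check_running_migration_mode {
    my ($vmid, $param) = @_;

    my $isrunning = PVE::LXC::check_running($vmid);
    if ($isrunning) {
	die "lxc live migration not implemented\n"
	    if $param->{online};
	die "running container needs restart mode for migration\n"
	    if !$param->{restart};
    }
    return $isrunning;
}
```

Both call sites could then use the same error messages and the same semantics for the running check.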
Re: [pve-devel] [PATCH container 1/2] implement lxc restart migration
> @@ -322,6 +338,14 @@ sub final_cleanup {
> 	my $cmd = [ @{$self->{rem_ssh}}, 'pct', 'unlock', $vmid ];
> 	$self->cmd_logerr($cmd, errmsg => "failed to clear migrate lock");
>     }
> +
> +    # in restart mode, we start the container on the target node
> +    # after migration
> +    if ($self->{opts}->{restart}) {
> +	$self->log('info', "start container on target node");
> +	my $cmd = [ @{$self->{rem_ssh}}, 'pct', 'start', $vmid];
> +	$self->cmd($cmd);
> +    }
> }

You always start the container here, even if it was stopped before?
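One way to address this review comment would be to remember whether the container was actually running before the migration and only start it again in that case. A sketch, under the assumption that prepare() stores the original state in a (hypothetical) `$self->{was_running}` flag before shutting the container down:

```perl
# only start the container on the target node if it was running
# before the migration; $self->{was_running} is a hypothetical flag
# that prepare() would need to set before stopping the container
if ($self->{opts}->{restart} && $self->{was_running}) {
    $self->log('info', "start container on target node");
    my $cmd = [ @{$self->{rem_ssh}}, 'pct', 'start', $vmid ];
    $self->cmd($cmd);
}
```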
Re: [pve-devel] [PATCH manager 1/3] add onlineonly and sharedonly to migrateall api call
> > +	    sharedonly => {
> > +		description => "Migrate only those guests with only shared storage",
> > +		optional => 1,
> > +		type => 'boolean'
> > +	    },
> > +	    onlineonly => {
> > +		description => "Migrate only those guests with only shared storage",
>
> description is wrong!
>
> > +		optional => 1,
> > +		type => 'boolean'
> > +	    },
>
> I can imagine another scenario:
>
> - Migrate only those guests which are offline
>
> so it is maybe better to define some filter enum:
>
> filter => {
>     type => 'string-list',
>     description => "Migrate only those guests which match the filter.",
>     optional => 1,
>     enum => ['running', 'stopped', 'shared'],
> }
>
> So you can select to migrate all stopped VMs on shared
> storage with "--filter 'stopped,shared'"

And maybe other filters, like:

'qemu' => only migrate qemu VMs
'lxc'  => only migrate LXC containers
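The proposed string-list filter could be consumed in the migrate-all worker roughly as sketched below. None of this is in the posted patches; it is only meant to show how such a filter might be parsed and applied, using the existing `PVE::Tools::split_list` helper.

```perl
# hypothetical parsing of a 'running,stopped,shared,qemu,lxc' filter list
my $parse_filter = sub {
    my ($filterstr) = @_;

    my $filter = {};
    for my $f (PVE::Tools::split_list($filterstr // '')) {
	die "unknown filter '$f'\n"
	    if $f !~ /^(running|stopped|shared|qemu|lxc)$/;
	$filter->{$f} = 1;
    }
    return $filter;
};

# possible usage: skip guests that do not match the selected filters
# my $filter = $parse_filter->($param->{filter});
# next if $filter->{running} && !$online;
# next if $filter->{stopped} && $online;
# next if $filter->{qemu} && $type ne 'qemu';
# next if $filter->{lxc} && $type ne 'lxc';
```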
Re: [pve-devel] [PATCH qemu-server] implement is_shared_only check
Your implementation also ignores snapshots (which can contain further local disks). We already have some code to detect if a VM only uses shared resources in PVE::QemuMigrate::sync_disks(). And please note that a shared storage may not be available on all nodes, see PVE::QemuServer::check_storage_availability().
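A caller deciding whether a guest is migratable would then also have to verify that its storages are actually available on the target node. A sketch only; the exact signature of check_storage_availability is assumed here from its use in qemu-server:

```perl
# before treating a guest as migratable, also make sure that every
# storage it uses exists on the target node (signature assumed)
my $storecfg = PVE::Storage::config();
eval {
    PVE::QemuServer::check_storage_availability($storecfg, $conf, $target);
};
if (my $err = $@) {
    warn "VM $vmid cannot be migrated to '$target': $err";
    next;
}
```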
Re: [pve-devel] [PATCH manager 1/3] add onlineonly and sharedonly to migrateall api call
> @@ -1538,6 +1553,16 @@ __PACKAGE__->register_method ({
> 		type => 'integer',
> 		minimum => 1
> 	    },
> +	    sharedonly => {
> +		description => "Migrate only those guests with only shared storage",
> +		optional => 1,
> +		type => 'boolean'
> +	    },
> +	    onlineonly => {
> +		description => "Migrate only those guests with only shared storage",

description is wrong!

> +		optional => 1,
> +		type => 'boolean'
> +	    },

I can imagine another scenario:

- Migrate only those guests which are offline

so it is maybe better to define some filter enum:

filter => {
    type => 'string-list',
    description => "Migrate only those guests which match the filter.",
    optional => 1,
    enum => ['running', 'stopped', 'shared'],
}

So you can select to migrate all stopped VMs on shared
storage with "--filter 'stopped,shared'" ??
Re: [pve-devel] [PATCH qemu-server] implement is_shared_only check
> +sub is_shared_only {
> +    my ($class, $conf, $scfg) = @_;
> +
> +    my $issharedonly = 1;
> +    PVE::QemuServer::foreach_drive($conf, sub {
> +	my ($ds, $drive) = @_;
> +
> +	# exit early
> +	return if !$issharedonly;
> +
> +	return if $drive->{file} eq 'none'; # cdrom with no file
> +	my $sid = PVE::Storage::parse_volume_id($drive->{file});

I guess this does not work for absolute paths (e.g. /dev/sda5)

> +	my $storage = PVE::Storage::storage_config($scfg, $sid);
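To cope with absolute paths, parse_volume_id can be called in non-dying mode; a path-based "volume" has no storage ID and would have to be treated as local. A sketch only, assuming parse_volume_id's second parameter suppresses the die as elsewhere in the PVE code:

```perl
# the second argument ($noerr) makes parse_volume_id return undef
# instead of dying for values that are not storage-backed volume IDs
my ($sid) = PVE::Storage::parse_volume_id($drive->{file}, 1);
if (!defined($sid)) {
    # absolute path like /dev/sda5 - always treat it as local
    $issharedonly = 0;
    return;
}
my $storage = PVE::Storage::storage_config($scfg, $sid);
$issharedonly = 0 if !$storage->{shared};
```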
Re: [pve-devel] [RFC cluster] datacenter.cfg: do not use 'default_key' in migration format
> data/PVE/Cluster.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
> index 09ed5d7..0edcc0a 100644
> --- a/data/PVE/Cluster.pm
> +++ b/data/PVE/Cluster.pm
> @@ -1323,7 +1323,7 @@ sub ssh_merge_known_hosts {
>
> my $migration_format = {
>     type => {
> -	default_key => 1,

I think the default key is a good thing - I would not remove that.

> +	optional => 1,

But yes, the whole value should be optional.

>	type => 'string',
>	enum => ['secure', 'insecure'],
>	description => "Migration traffic is encrypted using an SSH tunnel by " .
[pve-devel] Latest upgrade in enterprise repo
Hi all,

I am sorry, but I have to say that the latest upgrade in the enterprise
repo is a disaster. Migration after the upgrade is not possible, and
migration before the upgrade is not possible either! That migration
would not work after the upgrade was expected, but not being able to
migrate before the upgrade was certainly a big surprise. This means
upgrading is impossible without downtime. I did expect to see a big
warning poster on the forum.

Log from migrate before upgrade, with migration_unsecure: 1

Dec 01 22:35:52 starting migration of VM 156 to node 'esx1' (10.0.0.1)
Dec 01 22:35:52 copying disk images
Dec 01 22:35:52 starting VM 156 on remote node 'esx1'
Dec 01 22:35:55 starting online/live migration on tcp:localhost:60001
Dec 01 22:35:55 migrate_set_speed: 8589934592
Dec 01 22:35:55 migrate_set_downtime: 0.1
Dec 01 22:35:55 set migration_caps
Dec 01 22:35:55 set cachesize: 107374182
Dec 01 22:35:55 spice client_migrate_info
Dec 01 22:35:55 start migrate command to tcp:localhost:60001
Dec 01 22:35:57 migration status error: failed
Dec 01 22:35:57 ERROR: online migrate failure - aborting
Dec 01 22:35:57 aborting phase 2 - cleanup resources
Dec 01 22:35:57 migrate_cancel
Dec 01 22:35:58 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems

Without migration_unsecure:

# qm migrate 156 esx1 --online
Dec 01 22:42:27 starting migration of VM 156 to node 'esx1' (10.0.0.1)
Dec 01 22:42:27 copying disk images
Dec 01 22:42:27 starting VM 156 on remote node 'esx1'
Dec 01 22:42:30 start remote tunnel
Dec 01 22:42:31 starting online/live migration on unix:/run/qemu-server/156.migrate
Dec 01 22:42:31 migrate_set_speed: 8589934592
Dec 01 22:42:31 migrate_set_downtime: 0.1
Dec 01 22:42:31 set migration_caps
Dec 01 22:42:31 set cachesize: 107374182
Dec 01 22:42:31 spice client_migrate_info
Dec 01 22:42:31 start migrate command to unix:/run/qemu-server/156.migrate
Dec 01 22:42:33 migration status: active (transferred 625774775, remaining 313741312), total 1208827904)
Dec 01 22:42:33 migration xbzrle cachesize: 67108864 transferred 0 pages 0 cachemiss 0 overflow 0
Dec 01 22:42:35 migration speed: 256.00 MB/s - downtime 48 ms
Dec 01 22:42:35 migration status: completed
Dec 01 22:42:37 ERROR: VM 156 not running
Dec 01 22:42:37 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' root@10.0.0.1 qm resume 156 --skiplock --nocheck' failed: exit code 2
Dec 01 22:42:37 Waiting for spice server migration
Dec 01 22:42:39 ERROR: migration finished with problems (duration 00:00:12)
migration problems

Migration succeeded but was not able to start the VM on the other node.

--
Hilsen/Regards
Michael Rasmussen
[pve-devel] [PATCH] add migration chapters to the docs and to the gui
this series adds information about migration to the docs (including
container restart mode and the migrateall filters) and adds the help
buttons to the gui at the right places
[pve-devel] [PATCH manager] add help buttons to migration windows
Signed-off-by: Dominik Csapak
---
 www/manager6/window/Migrate.js    | 20 +++-
 www/manager6/window/MigrateAll.js |  8 +++-
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 5f76589..108fd16 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -94,6 +94,24 @@ Ext.define('PVE.window.Migrate', {
	    }
	});

+	var helpBtnConfig = {};
+
+	if (me.vmtype === 'qemu') {
+	    helpBtnConfig = {
+		onlineHelp: 'qm_migration',
+		hidden: false,
+		listenToGlobalEvent: false
+	    };
+	} else if (me.vmtype === 'lxc') {
+	    helpBtnConfig = {
+		onlineHelp: 'pct_migration',
+		hidden: false,
+		listenToGlobalEvent: false
+	    };
+	}
+
+	var helpBtn = Ext.create('PVE.button.Help', helpBtnConfig);
+
	Ext.apply(me, {
	    title: gettext('Migrate') + ((me.vmtype === 'qemu')?' VM ':' CT ') + me.vmid,
	    width: 350,
@@ -101,7 +119,7 @@ Ext.define('PVE.window.Migrate', {
	    layout: 'auto',
	    border: false,
	    items: [ me.formPanel ],
-	    buttons: [ submitBtn ]
+	    buttons: [ helpBtn, '->', submitBtn ]
	});

	me.callParent();
diff --git a/www/manager6/window/MigrateAll.js b/www/manager6/window/MigrateAll.js
index 4f97144..919328a 100644
--- a/www/manager6/window/MigrateAll.js
+++ b/www/manager6/window/MigrateAll.js
@@ -79,6 +79,12 @@ Ext.define('PVE.window.MigrateAll', {
	    }
	});

+	var helpBtn = Ext.create('PVE.button.Help', {
+	    onlineHelp: 'pvecm_migrate_all',
+	    hidden: false,
+	    listenToGlobalEvent: false
+	});
+
	Ext.apply(me, {
	    title: "Migrate All VMs",
	    width: 450,
@@ -86,7 +92,7 @@ Ext.define('PVE.window.MigrateAll', {
	    layout: 'auto',
	    border: false,
	    items: [ me.formPanel ],
-	    buttons: [ submitBtn ]
+	    buttons: [ helpBtn, '->', submitBtn ]
	});

	me.callParent();
--
2.1.4
[pve-devel] [PATCH docs 3/3] add migrate all chapter to pvecm.adoc
this explains how to migrate all vms off a node, and what the options are

Signed-off-by: Dominik Csapak
---
 pvecm.adoc | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/pvecm.adoc b/pvecm.adoc
index a8f017c..10cd7fe 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -991,6 +991,29 @@ migration: secure,network=10.1.2.0/24
 NOTE: The migration type must always be set when the migration network
 gets set in `/etc/pve/datacenter.cfg`.

+[[pvecm_migrate_all]]
+Migrate All
+~~~~~~~~~~~
+
+You can migrate all Containers and VMs from one node to another, with
+the api call /nodes/NODENAME/migrateall. This can be done either via
+pvesh:
+
+ pvesh create /nodes/NODENAME/migrateall
+
+or via the webgui under the 'More' button.
+
+There are 3 options for this 'Migrate All' action:
+
+* maxworkers: defines the maximum number of simultaneous migrations which the node will
+  execute (e.g. maxworkers=3 will always migrate 3 VMs/Containers at the same time)
+* onlineonly: if this is true, the node will only try to migrate the VMs
+  and Containers which are online
+* sharedonly: if this is true, the node will only try to migrate the VMs and
+  Containers which have only resources on shared storage
+
+CAUTION: Online Containers will be migrated in restart mode, so to avoid long
+downtime of Containers with local storage, use the sharedonly flag.
+
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
--
2.1.4
[pve-devel] [PATCH docs 2/3] add migration subchapter to qm.adoc
explain online and offline migration

Signed-off-by: Dominik Csapak
---
 qm.adoc | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index b9b7ca1..c133f7b 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -512,6 +512,22 @@ Same as above, but only wait for 40 seconds.

  qm shutdown 300 && qm wait 300 -timeout 40

+[[qm_migration]]
+Migration
+---------
+
+If you have a cluster, you can migrate your VM to another host with
+
+ qm migrate <vmid> <target>
+
+When your VM is running and it has no local resources defined (such as disks
+on local storage, passed through devices, etc.) you can initiate a live
+migration with the -online flag.
+
+If you have local resources, you can still offline migrate your VMs,
+as long as all disks are on storages which are defined on both hosts.
+The migration will then copy the disks over the network to the target host.
+
 [[qm_configuration]]
 Configuration
--
2.1.4
[pve-devel] [PATCH docs 1/3] add migration subchapter to pct.adoc
explain the offline and restart migration

Signed-off-by: Dominik Csapak
---
 pct.adoc | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/pct.adoc b/pct.adoc
index 171a4a5..4ca8d7e 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -692,6 +692,26 @@ NOTE: If you have changed the container's configuration since the last start
 attempt with `pct start`, you need to run `pct start` at least once to also
 update the configuration used by `lxc-start`.

+[[pct_migration]]
+Migration
+---------
+
+If you have a cluster, you can migrate your Containers with
+
+ pct migrate <vmid> <target>
+
+This works as long as your Container is offline. If it has local volumes or
+mount points defined, the migration will copy the content over the network to
+the target host, provided the same storage is defined there.
+
+If you want to migrate online Containers, the only way is to use
+restart migration. This can be initiated with the -restart flag and the
+optional -timeout parameter.
+
+A restart migration will shut down the Container and kill it after the
+specified timeout (the default is 180 seconds). Then it will migrate the
+Container like an offline migration, and when finished, it starts the
+Container on the target node.
+
 [[pct_configuration]]
 Configuration
--
2.1.4
[pve-devel] [PATCH manager 3/3] add onlineonly/sharedonly flags to migrate all gui
Signed-off-by: Dominik Csapak
---
 www/manager6/window/MigrateAll.js | 24 +---
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/www/manager6/window/MigrateAll.js b/www/manager6/window/MigrateAll.js
index 1bcba1e..4f97144 100644
--- a/www/manager6/window/MigrateAll.js
+++ b/www/manager6/window/MigrateAll.js
@@ -3,10 +3,10 @@ Ext.define('PVE.window.MigrateAll', {

     resizable: false,

-    migrate: function(target, maxworkers) {
+    migrate: function(params) {
	var me = this;
	PVE.Utils.API2Request({
-	    params: { target: target, maxworkers: maxworkers},
+	    params: params,
	    url: '/nodes/' + me.nodename + '/' + "/migrateall",
	    waitMsgTarget: me,
	    method: 'POST',
@@ -36,7 +36,7 @@ Ext.define('PVE.window.MigrateAll', {
	    bodyPadding: 10,
	    border: false,
	    fieldDefaults: {
-		labelWidth: 100,
+		labelWidth: 200,
		anchor: '100%'
	    },
	    items: [
@@ -45,6 +45,7 @@ Ext.define('PVE.window.MigrateAll', {
		    name: 'target',
		    fieldLabel: 'Target node',
		    allowBlank: false,
+		    disallowedNodes: [me.nodename],
		    onlineValidator: true
		},
		{
@@ -53,8 +54,18 @@ Ext.define('PVE.window.MigrateAll', {
		    minValue: 1,
		    maxValue: 100,
		    value: 1,
-		    fieldLabel: 'Parallel jobs',
+		    fieldLabel: gettext('Parallel Migrations'),
		    allowBlank: false
+		},
+		{
+		    xtype: 'pvecheckbox',
+		    name: 'onlineonly',
+		    fieldLabel: gettext('Online Guests only')
+		},
+		{
+		    xtype: 'pvecheckbox',
+		    name: 'sharedonly',
+		    fieldLabel: gettext('Shared Storage only')
		}
	    ]
	});
@@ -64,14 +75,13 @@ Ext.define('PVE.window.MigrateAll', {
	var submitBtn = Ext.create('Ext.Button', {
	    text: 'Migrate',
	    handler: function() {
-		var values = form.getValues();
-		me.migrate(values.target, values.maxworkers);
+		me.migrate(form.getValues());
	    }
	});

	Ext.apply(me, {
	    title: "Migrate All VMs",
-	    width: 350,
+	    width: 450,
	    modal: true,
	    layout: 'auto',
	    border: false,
--
2.1.4
[pve-devel] [PATCH manager 2/3] change lxc migration to restart mode
since we do not have 'online' migration for ct, we just use the
checkbox for restart migration

Signed-off-by: Dominik Csapak
---
 www/manager6/window/Migrate.js | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 23e1e5b..5f76589 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -5,8 +5,18 @@ Ext.define('PVE.window.Migrate', {

     migrate: function(target, online) {
	var me = this;
+
+	var params = {
+	    target: target
+	};
+
+	if (me.vmtype === 'qemu') {
+	    params.online = online;
+	} else {
+	    params.restart = online;
+	}
	PVE.Utils.API2Request({
-	    params: { target: target, online: online },
+	    params: params,
	    url: '/nodes/' + me.nodename + '/' + me.vmtype + '/' + me.vmid + "/migrate",
	    waitMsgTarget: me,
	    method: 'POST',
@@ -69,7 +79,7 @@ Ext.define('PVE.window.Migrate', {
		    uncheckedValue: 0,
		    defaultValue: 0,
		    checked: running,
-		    fieldLabel: gettext('Online')
+		    fieldLabel: me.vmtype === 'qemu' ? gettext('Online') : gettext('Restart Mode')
		}
	    ]
	});
--
2.1.4
[pve-devel] [PATCH manager 1/3] add onlineonly and sharedonly to migrateall api call
this adds two optional filters to the migrateall api call:

onlineonly: migrates only vms/ct which are online, ct now in restart mode
sharedonly: migrates only vms/ct which have all disks/volumes/mp on
shared storage or marked as shared

Signed-off-by: Dominik Csapak
---
 PVE/API2/Nodes.pm | 35 ++-
 1 file changed, 30 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 457263a..8fe825f 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -1211,11 +1211,17 @@ __PACKAGE__->register_method({
     }});

 my $get_start_stop_list = sub {
-    my ($nodename, $autostart) = @_;
+    my ($nodename, $autostart, $sharedonly) = @_;

     my $vmlist = PVE::Cluster::get_vmlist();
     my $resList = {};
+
+    # read storage config only one time
+    my $scfg;
+    if ($sharedonly) {
+	$scfg = PVE::Storage::config();
+    }
     foreach my $vmid (keys %{$vmlist->{ids}}) {
	my $d = $vmlist->{ids}->{$vmid};
	my $startup;
@@ -1228,12 +1234,15 @@ my $get_start_stop_list = sub {
	    my $conf;
	    if ($d->{type} eq 'lxc') {
		$conf = PVE::LXC::Config->load_config($vmid);
+		return if $sharedonly && !PVE::LXC::Config->is_shared_only($conf, $scfg);
	    } elsif ($d->{type} eq 'qemu') {
		$conf = PVE::QemuConfig->load_config($vmid);
+		return if $sharedonly && !PVE::QemuConfig->is_shared_only($conf, $scfg);
	    } else {
		die "unknown VM type '$d->{type}'\n";
	    }
+
	    return if $autostart && !$conf->{onboot};

	    if ($conf->{startup}) {
@@ -1493,16 +1502,22 @@ __PACKAGE__->register_method ({
     }});

 my $create_migrate_worker = sub {
-    my ($nodename, $type, $vmid, $target) = @_;
+    my ($nodename, $type, $vmid, $target, $onlineonly) = @_;

     my $upid;
     if ($type eq 'lxc') {
	my $online = PVE::LXC::check_running($vmid) ? 1 : 0;
+
+	return undef if $onlineonly && !$online;
+
	print STDERR "Migrating CT $vmid\n";
	$upid = PVE::API2::LXC->migrate_vm({node => $nodename, vmid => $vmid, target => $target,
-					    online => $online });
+					    restart => $online });
     } elsif ($type eq 'qemu') {
	my $online = PVE::QemuServer::check_running($vmid, 1) ? 1 : 0;
+
+	return undef if $onlineonly && !$online;
+
	print STDERR "Migrating VM $vmid\n";
	$upid = PVE::API2::Qemu->migrate_vm({node => $nodename, vmid => $vmid, target => $target,
					     online => $online });
@@ -1538,6 +1553,16 @@ __PACKAGE__->register_method ({
		type => 'integer',
		minimum => 1
	    },
+	    sharedonly => {
+		description => "Migrate only those guests with only shared storage",
+		optional => 1,
+		type => 'boolean'
+	    },
+	    onlineonly => {
+		description => "Migrate only those guests with only shared storage",
+		optional => 1,
+		type => 'boolean'
+	    },
	},
     },
     returns => {
@@ -1563,7 +1588,7 @@ __PACKAGE__->register_method ({

	    $rpcenv->{type} = 'priv'; # to start tasks in background

-	    my $migrateList = &$get_start_stop_list($nodename);
+	    my $migrateList = &$get_start_stop_list($nodename, undef, $param->{sharedonly});

	    foreach my $order (sort {$b <=> $a} keys %$migrateList) {
		my $vmlist = $migrateList->{$order};
@@ -1571,7 +1596,7 @@ __PACKAGE__->register_method ({
		foreach my $vmid (sort {$b <=> $a} keys %$vmlist) {
		    my $d = $vmlist->{$vmid};
		    my $pid;
-		    eval { $pid = &$create_migrate_worker($nodename, $d->{type}, $vmid, $target); };
+		    eval { $pid = &$create_migrate_worker($nodename, $d->{type}, $vmid, $target, $param->{onlineonly}); };
		    warn $@ if $@;
		    next if !$pid;
--
2.1.4
[pve-devel] [PATCH container] implement is_shared_only check
this iterates all mountpoints and checks if they are on a shared
storage, or marked as 'shared'

Signed-off-by: Dominik Csapak
---
 src/PVE/LXC/Config.pm | 25 +
 1 file changed, 25 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index d4d973b..ba5c9ac 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -75,6 +75,31 @@ sub has_feature {
     return $err ? 0 : 1;
 }

+sub is_shared_only {
+    my ($class, $conf, $scfg) = @_;
+
+    my $issharedonly = 1;
+    $class->foreach_mountpoint($conf, sub {
+	my ($ms, $mountpoint) = @_;
+
+	# exit early
+	return if !$issharedonly;
+
+	if ($mountpoint->{type} ne 'volume') {
+	    $issharedonly = 0 if !$mountpoint->{shared};
+	    return;
+	}
+
+	my $sid = PVE::Storage::parse_volume_id($mountpoint->{volume});
+	my $storage = PVE::Storage::storage_config($scfg, $sid);
+	if (!$storage->{shared}) {
+	    $issharedonly = 0;
+	}
+    });
+
+    return $issharedonly;
+}
+
 sub __snapshot_save_vmstate {
     my ($class, $vmid, $conf, $snapname, $storecfg) = @_;
     die "implement me - snapshot_save_vmstate\n";
--
2.1.4
[pve-devel] [PATCH] add filters to migrateall
this series adds 2 filters to migrateall: sharedonly and onlineonly;
details are in the commit messages

we also change the gui so it shows these two options, and we change
the lxc migration to restart mode
[pve-devel] [PATCH common] add abstract is_shared_only check
this must be implemented in the Migration subclasses, and is a check
if all disks/mountpoints/etc. are marked as shared

this will be used for the migrate all call, where we can then migrate
the guests with only shared storage

Signed-off-by: Dominik Csapak
---
 src/PVE/AbstractConfig.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
index ac12fd1..c246cc5 100644
--- a/src/PVE/AbstractConfig.pm
+++ b/src/PVE/AbstractConfig.pm
@@ -190,6 +190,12 @@ sub has_feature {
     die "implement me - abstract method\n";
 }

+# checks whether all volumes are on a shared storage or not
+sub is_shared_only {
+    my ($class, $conf, $scfg) = @_;
+    die "implement me - abstract method\n";
+}
+
 # Internal snapshots

 # NOTE: Snapshot create/delete involves several non-atomic
--
2.1.4
[pve-devel] [PATCH qemu-server] implement is_shared_only check
here we check if every disk is on a storage marked as shared, or the
cdrom has the value 'none'

Signed-off-by: Dominik Csapak
---
 PVE/QemuConfig.pm | 21 +
 1 file changed, 21 insertions(+)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 692bba8..15df6ff 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -63,6 +63,27 @@ sub has_feature {
     return $err ? 0 : 1;
 }

+sub is_shared_only {
+    my ($class, $conf, $scfg) = @_;
+
+    my $issharedonly = 1;
+    PVE::QemuServer::foreach_drive($conf, sub {
+	my ($ds, $drive) = @_;
+
+	# exit early
+	return if !$issharedonly;
+
+	return if $drive->{file} eq 'none'; # cdrom with no file
+	my $sid = PVE::Storage::parse_volume_id($drive->{file});
+	my $storage = PVE::Storage::storage_config($scfg, $sid);
+	if (!$storage->{shared}) {
+	    $issharedonly = 0;
+	}
+    });
+
+    return $issharedonly;
+}
+
 sub __snapshot_save_vmstate {
     my ($class, $vmid, $conf, $snapname, $storecfg) = @_;
--
2.1.4
[pve-devel] [PATCH container 2/2] add restart migration to lxc api
this simply adds the restart flag and the optional timeout to the lxc api,
required for the restart mode migration

Signed-off-by: Dominik Csapak
---
 src/PVE/API2/LXC.pm | 20 +---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 38b1feb..3c02398 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -845,6 +845,17 @@ __PACKAGE__->register_method({
		description => "Use online/live migration.",
		optional => 1,
	    },
+	    restart => {
+		type => 'boolean',
+		description => "Use restart migration",
+		optional => 1,
+	    },
+	    timeout => {
+		type => 'integer',
+		description => "Timeout in seconds for shutdown for restart migration",
+		optional => 1,
+		default => 60
+	    },
	    force => {
		type => 'boolean',
		description => "Force migration despite local bind / device" .
@@ -880,10 +891,13 @@ __PACKAGE__->register_method({
	# test if VM exists
	PVE::LXC::Config->load_config($vmid);

+	my $isrunning = PVE::LXC::check_running($vmid);
	# try to detect errors early
-	if (PVE::LXC::check_running($vmid)) {
-	    die "can't migrate running container without --online\n"
-		if !$param->{online};
+	if ($isrunning) {
+	    die "lxc live migration not implemented\n"
+		if $param->{online};
+	    die "running container needs restart mode for migration\n"
+		if !$param->{restart};
	}

	if (PVE::HA::Config::vm_is_ha_managed($vmid) && $rpcenv->{type} ne 'ha') {
--
2.1.4
[pve-devel] [PATCH container 1/2] implement lxc restart migration
this checks for the 'restart' parameter and if given, shuts down the
container with an optionally defined timeout (default 180s), and
continues the migration like an offline one. after finishing, we start
the container on the target node

Signed-off-by: Dominik Csapak
---
 src/PVE/LXC/Migrate.pm | 38 +++---
 1 file changed, 31 insertions(+), 7 deletions(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 73af6d4..137a3ee 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -23,6 +23,7 @@ sub prepare {
     my ($self, $vmid) = @_;

     my $online = $self->{opts}->{online};
+    my $restart = $self->{opts}->{restart};

     $self->{storecfg} = PVE::Storage::config();

@@ -31,12 +32,10 @@ sub prepare {

     PVE::LXC::Config->check_lock($conf);

-    my $running = 0;
-    if (PVE::LXC::check_running($vmid)) {
-	die "lxc live migration is currently not implemented\n";
-
-	die "can't migrate running container without --online\n" if !$online;
-	$running = 1;
+    my $running = PVE::LXC::check_running($vmid);
+    if ($running) {
+	die "lxc live migration is currently not implemented\n" if $online;
+	die "can't migrate running container without --restart\n" if !$restart;
     }

     my $force = $self->{opts}->{force} // 0;
@@ -78,8 +77,9 @@ sub prepare {
	    # only activate if not shared
	    push @$need_activate, $volid;

+	    # unless in restart mode because we shut the container down
	    die "unable to migrate local mount point '$volid' while CT is running"
-		if $running;
+		if $running && !$restart;
	}
     });

@@ -93,6 +93,22 @@ sub prepare {
     eval { $self->cmd_quiet($cmd); };
     die "Can't connect to destination address using public key\n" if $@;

+    # in restart mode, we shutdown the container before migrating
+    if ($restart && $running) {
+	my $timeout = $self->{opts}->{timeout} // 180;
+
+	$self->log('info', "shutdown CT $vmid\n");
+
+	my $cmd = ['lxc-stop', '-n', $vmid, '--timeout', $timeout];
+	$self->cmd($cmd, timeout => $timeout + 5);
+
+	# make sure container is stopped
+	$cmd = ['lxc-wait', '-n', $vmid, '-t', 5, '-s', 'STOPPED'];
+	$self->cmd($cmd);
+
+	$running = 0;
+    }
+
     return $running;
 }

@@ -322,6 +338,14 @@ sub final_cleanup {
	my $cmd = [ @{$self->{rem_ssh}}, 'pct', 'unlock', $vmid ];
	$self->cmd_logerr($cmd, errmsg => "failed to clear migrate lock");
     }
+
+    # in restart mode, we start the container on the target node
+    # after migration
+    if ($self->{opts}->{restart}) {
+	$self->log('info', "start container on target node");
+	my $cmd = [ @{$self->{rem_ssh}}, 'pct', 'start', $vmid];
+	$self->cmd($cmd);
+    }
 }

 1;
--
2.1.4
[pve-devel] [PATCH] add lxc restart migration
this series adds container restart migration. it shuts down the running
container, migrates it offline, and restarts it on the target node

in case of shared storage, this is almost instantaneous
Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs
>>We currently prepare for
>>the 4.4 release, but when that is done we can start adding new features
>>like local disk live migration.

Ok, great!

----- Original Mail -----
From: "dietmar"
To: "aderumier", "pve-devel"
Sent: Thursday, 1 December 2016 16:30:22
Subject: Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs

> Any news for local disk live migration ?
>
> I have time to work on it this month if cleanup is needed, but I'll be
> very busy at the beginning of next year.

AFAIR the patch looks already quite good. We currently prepare for
the 4.4 release, but when that is done we can start adding new features
like local disk live migration.
Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs
> Any news for local disk live migration ?
>
> I have time to work on it this month if cleanup is needed, but I'll be
> very busy at the beginning of next year.

AFAIR the patch looks already quite good. We currently prepare for
the 4.4 release, but when that is done we can start adding new features
like local disk live migration.
Re: [pve-devel] [PATCH manager 1/3] Add optional parameter panelOnlineHelp
> On December 1, 2016 at 2:16 PM Emmanuel Kasper wrote:
>
> this widget and its containing InputPanel can be used by both
> QEMU/LXC so we need to be able to pass the onlineHelp at
> the instantiation step
> ---
>  www/manager6/qemu/StartupEdit.js | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/www/manager6/qemu/StartupEdit.js b/www/manager6/qemu/StartupEdit.js
> index 3c832c5..8ed45a2 100644
> --- a/www/manager6/qemu/StartupEdit.js
> +++ b/www/manager6/qemu/StartupEdit.js
> @@ -56,13 +56,18 @@ Ext.define('PVE.qemu.StartupInputPanel', {
>
>  Ext.define('PVE.qemu.StartupEdit', {
>      extend: 'PVE.window.Edit',
> +    alias: 'widget.pveQemuStartupEdit',
> +    panelOnlineHelp: undefined,

Is it really necessary to use a new name (panelOnlineHelp instead of onlineHelp)?
[pve-devel] [RFC cluster] datacenter.cfg: do not use 'default_key' in migration format
'default_key' does not really make sense here, as the migration format properties can be set independently of each other and can both be optional. Signed-off-by: Thomas Lamprecht --- This breaks backwards compatibility in setups which used the default_key format already. I could add some code which handles the break, or we could also just wait for 5.X where some breakages are expected and can be handled at once. data/PVE/Cluster.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm index 09ed5d7..0edcc0a 100644 --- a/data/PVE/Cluster.pm +++ b/data/PVE/Cluster.pm @@ -1323,7 +1323,7 @@ sub ssh_merge_known_hosts { my $migration_format = { type => { - default_key => 1, + optional => 1, type => 'string', enum => ['secure', 'insecure'], description => "Migration traffic is encrypted using an SSH tunnel by " . -- 2.1.4
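For readers unfamiliar with the format machinery: a `default_key` lets a bare value in a comma-separated property string stand in for one well-known key, which is exactly the backwards-compatibility concern raised above. The following is a minimal illustrative parser in Python (the function is mine, not the actual JSONSchema code in pve-common) showing the behavioral difference:

```python
# Hypothetical sketch of property-string parsing: with a default key, a bare
# value like "secure,network=10.0.0.0/24" is read as type=secure; without
# one, every property must be spelled out as key=value.
def parse_property_string(value, default_key=None):
    props = {}
    for part in value.split(','):
        if '=' in part:
            key, val = part.split('=', 1)
            props[key] = val
        elif default_key is not None and default_key not in props:
            props[default_key] = part  # bare value maps to the default key
        else:
            raise ValueError(f"value without key not allowed: {part!r}")
    return props

# Old behavior (default key 'type'): bare "secure" is accepted.
print(parse_property_string("secure,network=10.0.0.0/24", default_key="type"))
# New behavior (no default key): the same string is rejected.
try:
    parse_property_string("secure,network=10.0.0.0/24")
except ValueError as err:
    print("rejected:", err)
```

Under this reading, a datacenter.cfg entry like `migration: secure` written with the old default-key form would no longer parse once the patch is applied, which is the breakage the author suggests deferring to 5.X.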
Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs
Any news for local disk live migration? I have time to work on it this month if cleanup is needed, but I'll be very busy at the beginning of next year. - Original Message - From: "aderumier" To: "pve-devel" Sent: Wednesday, 16 November 2016 12:10:05 Subject: Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs Wolfgang, any comment? - Original Message - From: "aderumier" To: "Wolfgang Bumiller" Cc: "pve-devel" Sent: Thursday, 10 November 2016 15:47:47 Subject: Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs >>Well, yes, but what happens when the connection fails or gets interrupted? >>A vanishing connection (timeout) as well as when the connection gets killed >>for some reason (eg. tcpkill) need to be handled in the >>qemu_drive_mirror_monitor() functions properly. A running mirroring job auto-aborts if the network fails (or on any write/read error on source/destination). This check was more for establishing the connection: drive-mirror hangs while trying to establish the connection if the target is not responding (and I'm not sure there is a timeout, as I have waited some minutes without a response). - Original Message - From: "Wolfgang Bumiller" To: "aderumier" Cc: "pve-devel" Sent: Thursday, 10 November 2016 14:43:15 Subject: Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs > On November 10, 2016 at 1:54 PM Alexandre DERUMIER > wrote: > > > > + die "can't connect remote nbd server $server:$port" if > > !PVE::Network::tcp_ping($server, $port, 2); > >>I'm not all too happy about this check here as it is a TOCTOU race. > >>(I'm not happy about some of the other uses of it as well, but some are > >>only for status queries, iow. to display information, where it's fine.) > > >>However, in this case, if broken/missing connections can still not be > >>caught (like in my previous tests), then this only hides 99.99% of the > >>cases while still wrongly deleting disks in the other 0.01%, which is > >>unacceptable behavior. 
> > So, do you want me to remove the check? Well, yes, but what happens when the connection fails or gets interrupted? A vanishing connection (timeout) as well as when the connection gets killed for some reason (eg. tcpkill) need to be handled in the qemu_drive_mirror_monitor() functions properly. > > > >>$jobs is still empty at this point. The assignment below should be moved > >>up. > > > + die "mirroring error: $err"; > > + } > > + > > + $jobs->{"drive-$drive"} = {}; > >>This one ^ > > Ok. > > Thanks for the review! > > > > - Original Message - > From: "Wolfgang Bumiller" > To: "aderumier" > Cc: "pve-devel" > Sent: Thursday, 10 November 2016 13:21:00 > Subject: Re: [pve-devel] [PATCH 2/6] qemu_drive_mirror : handle multiple jobs > > On Tue, Nov 08, 2016 at 04:29:30AM +0100, Alexandre Derumier wrote: > > we can use multiple drive_mirror in parallel. > > > > block-job-complete can be skipped, if we want to add more mirror jobs later. > > > > also add support for nbd uri to qemu_drive_mirror > > > > Signed-off-by: Alexandre Derumier > > --- > > PVE/QemuServer.pm | 171 > > +++--- > > 1 file changed, 123 insertions(+), 48 deletions(-) > > > > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm > > index 54edd96..e989670 100644 > > --- a/PVE/QemuServer.pm > > +++ b/PVE/QemuServer.pm > > @@ -5824,91 +5824,165 @@ sub qemu_img_format { > > } > > > > sub qemu_drive_mirror { > > - my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized) = @_; > > + my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, > > $skipcomplete) = @_; > > > > - my $storecfg = PVE::Storage::config(); > > - my ($dst_storeid, $dst_volname) = > > PVE::Storage::parse_volume_id($dst_volid); > > + $jobs = {} if !$jobs; > > + > > + my $qemu_target; > > + my $format; > > > > - my $dst_scfg = PVE::Storage::storage_config($storecfg, $dst_storeid); > > + if($dst_volid =~ /^nbd:(localhost|[\d\.]+|\[[\d\.:a-fA-F]+\]):(\d+)/) { > > + $qemu_target = $dst_volid; > > + my $server = $1; > > + my $port = 
$2; > > + $format = "nbd"; > > + die "can't connect remote nbd server $server:$port" if > > !PVE::Network::tcp_ping($server, $port, 2); > I'm not all too happy about this check here as it is a TOCTOU race. > (I'm not happy about some of the other uses of it as well, but some are > only for status queries, iow. to display information, where it's fine.) > > However, in this case, if broken/missing connections can still not be > caught (like in my previous tests), then this only hides 99.99% of the > cases
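The TOCTOU (time-of-check/time-of-use) objection is easiest to see with a minimal sketch of such a pre-flight check. The `tcp_ping` below is my own illustrative Python stand-in for the idea behind `PVE::Network::tcp_ping`, not its actual implementation:

```python
import socket

def tcp_ping(host, port, timeout=2):
    """Return True if a TCP connection to host:port succeeds within timeout.

    Illustrative only: even if this probe succeeds, the target can vanish
    between the check and the moment drive-mirror actually connects (the
    TOCTOU race), so the mirror-monitor path must still handle connect,
    read, and write failures on its own."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In other words, the probe can at best give an early, friendlier error message for the common case of an unreachable NBD server; it cannot replace proper error handling in `qemu_drive_mirror_monitor()`.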
[pve-devel] [PATCH pve-container] Do not skip unprivileged config parameter when restoring a container
This allows keeping the parameter when restoring from a backup. --- src/PVE/LXC/Create.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm index 1f1ece1..11bc00d 100644 --- a/src/PVE/LXC/Create.pm +++ b/src/PVE/LXC/Create.pm @@ -114,7 +114,7 @@ sub restore_configuration { my $oldconf = PVE::LXC::Config::parse_pct_config("/lxc/$vmid.conf", $raw); foreach my $key (keys %$oldconf) { - next if $key eq 'digest' || $key eq 'rootfs' || $key eq 'snapshots' || $key eq 'unprivileged' || $key eq 'parent'; + next if $key eq 'digest' || $key eq 'rootfs' || $key eq 'snapshots' || $key eq 'parent'; next if $key =~ /^mp\d+$/; # don't recover mountpoints next if $key =~ /^unused\d+$/; # don't recover unused disks if ($restricted && $key eq 'lxc') { -- 2.1.4
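The effect of the patch is to shrink the skip-list in `restore_configuration` so that `unprivileged` is copied from the backed-up config again. A hedged Python re-statement of that filtering logic (the real code is Perl in src/PVE/LXC/Create.pm; the helper name and sample config are mine):

```python
import re

# After the patch, 'unprivileged' is no longer in the skip set, so it
# survives a restore; mount points and unused disks are still dropped.
SKIP_KEYS = {'digest', 'rootfs', 'snapshots', 'parent'}

def keys_to_restore(oldconf):
    restored = {}
    for key, value in oldconf.items():
        if key in SKIP_KEYS:
            continue
        if re.fullmatch(r'mp\d+', key):      # don't recover mount points
            continue
        if re.fullmatch(r'unused\d+', key):  # don't recover unused disks
            continue
        restored[key] = value
    return restored

conf = {'hostname': 'ct1', 'unprivileged': '1',
        'rootfs': 'local:100/vm-100-disk-0.raw',
        'mp0': 'local:100/vm-100-disk-1.raw',
        'unused0': 'local:100/old.raw'}
print(keys_to_restore(conf))
```

Before the patch, `'unprivileged'` would have been in `SKIP_KEYS` as well, which is why restored containers silently lost the flag.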
[pve-devel] [PATCH manager 1/3] Add optional parameter panelOnlineHelp
this widget and its containing InputPanel can be used by both QEMU/LXC so we need to be able to pass the onlineHelp at the instantiation step --- www/manager6/qemu/StartupEdit.js | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/www/manager6/qemu/StartupEdit.js b/www/manager6/qemu/StartupEdit.js index 3c832c5..8ed45a2 100644 --- a/www/manager6/qemu/StartupEdit.js +++ b/www/manager6/qemu/StartupEdit.js @@ -56,13 +56,18 @@ Ext.define('PVE.qemu.StartupInputPanel', { Ext.define('PVE.qemu.StartupEdit', { extend: 'PVE.window.Edit', +alias: 'widget.pveQemuStartupEdit', +panelOnlineHelp: undefined, initComponent : function() { /*jslint confusion: true */ var me = this; - var ipanel = Ext.create('PVE.qemu.StartupInputPanel', {}); + var ipanelConfig = me.panelOnlineHelp ? {onlineHelp: me.panelOnlineHelp} : + {}; + + var ipanel = Ext.create('PVE.qemu.StartupInputPanel', ipanelConfig); Ext.applyIf(me, { subject: gettext('Start/Shutdown order'), -- 2.1.4
[pve-devel] [PATCH manager 2/3] Pass a panelOnlineHelp to get a help button for LXC Start/Shutdown options
--- www/manager6/lxc/Options.js | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js index 2df93a9..7670342 100644 --- a/www/manager6/lxc/Options.js +++ b/www/manager6/lxc/Options.js @@ -42,8 +42,10 @@ Ext.define('PVE.lxc.Options', { header: gettext('Start/Shutdown order'), defaultValue: '', renderer: PVE.Utils.render_kvm_startup, - editor: caps.vms['VM.Config.Options'] && caps.nodes['Sys.Modify'] ? - 'PVE.qemu.StartupEdit' : undefined + editor: caps.vms['VM.Config.Options'] && caps.nodes['Sys.Modify'] ? { + xtype: 'pveQemuStartupEdit', + panelOnlineHelp: 'pct_startup_and_shutdown' + } : undefined }, ostype: { header: gettext('OS Type'), -- 2.1.4 ___ pve-devel mailing list pve-devel@pve.proxmox.com http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH manager 3/3] Remove last extra comma
also whitespace / indent cleanup --- www/manager6/lxc/Options.js | 34 +- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js index 7670342..9edcb05 100644 --- a/www/manager6/lxc/Options.js +++ b/www/manager6/lxc/Options.js @@ -121,28 +121,28 @@ Ext.define('PVE.lxc.Options', { } } : undefined }, -protection: { -header: gettext('Protection'), -defaultValue: false, -renderer: PVE.Utils.format_boolean, -editor: caps.vms['VM.Config.Options'] ? { -xtype: 'pveWindowEdit', -subject: gettext('Protection'), -items: { -xtype: 'pvecheckbox', -name: 'protection', -uncheckedValue: 0, -defaultValue: 0, -deleteDefaultValue: true, -fieldLabel: gettext('Enabled') -} -} : undefined + protection: { + header: gettext('Protection'), + defaultValue: false, + renderer: PVE.Utils.format_boolean, + editor: caps.vms['VM.Config.Options'] ? { + xtype: 'pveWindowEdit', + subject: gettext('Protection'), + items: { + xtype: 'pvecheckbox', + name: 'protection', + uncheckedValue: 0, + defaultValue: 0, + deleteDefaultValue: true, + fieldLabel: gettext('Enabled') + } + } : undefined }, unprivileged: { header: gettext('Unprivileged container'), renderer: PVE.Utils.format_boolean, defaultValue: 0 - }, + } }; var baseurl = 'nodes/' + nodename + '/lxc/' + vmid + '/config'; -- 2.1.4 ___ pve-devel mailing list pve-devel@pve.proxmox.com http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH container] VZDump: implement stopwait
This was missing in pve-container, qemu-server does it already. Signed-off-by: Thomas Lamprecht --- src/PVE/VZDump/LXC.pm | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm index 81f2588..46b5bd2 100644 --- a/src/PVE/VZDump/LXC.pm +++ b/src/PVE/VZDump/LXC.pm @@ -248,7 +248,10 @@ sub copy_data_phase2 { sub stop_vm { my ($self, $task, $vmid) = @_; -$self->cmd("lxc-stop -n $vmid"); +my $opts = $self->{vzdump}->{opts}; +my $timeout = $opts->{stopwait} * 60; + +$self->cmd("lxc-stop -n $vmid -t $timeout"); # make sure container is stopped $self->cmd("lxc-wait -n $vmid -s STOPPED"); -- 2.1.4
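The `* 60` in the patch exists because vzdump's `stopwait` option is given in minutes, while `lxc-stop -t` expects seconds. A tiny Python sketch of the command construction (hypothetical helper; the real code is the Perl above):

```python
# vzdump 'stopwait' is in minutes; `lxc-stop -t` takes seconds.
def build_stop_command(vmid, stopwait_minutes):
    timeout = stopwait_minutes * 60
    return f"lxc-stop -n {vmid} -t {timeout}"

print(build_stop_command(100, 10))  # 10 minutes -> "lxc-stop -n 100 -t 600"
```

After the timeout expires, `lxc-stop` falls back to killing the container, and the subsequent `lxc-wait -s STOPPED` confirms it is actually down before the backup proceeds.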
[pve-devel] applied: [PATCH kvm 1/2] savevm-async: fix possibly uninitialized variable
--- debian/patches/pve/0041-savevm-async-updates.patch | 22 +- 1 file changed, 13 insertions(+), 9 deletions(-) diff --git a/debian/patches/pve/0041-savevm-async-updates.patch b/debian/patches/pve/0041-savevm-async-updates.patch index 26df2fd..946aaa8 100644 --- a/debian/patches/pve/0041-savevm-async-updates.patch +++ b/debian/patches/pve/0041-savevm-async-updates.patch @@ -1,14 +1,14 @@ -From b7bac74dd970eba47961cdaac3b219a2de7668b5 Mon Sep 17 00:00:00 2001 +From f678278f0f4d311a0d9bfa087bd722aa7403df11 Mon Sep 17 00:00:00 2001 From: Wolfgang BumillerDate: Fri, 9 Sep 2016 15:21:19 +0200 -Subject: [PATCH 41/44] savevm-async updates +Subject: [PATCH 41/47] savevm-async updates --- - savevm-async.c | 76 +- - 1 file changed, 38 insertions(+), 38 deletions(-) + savevm-async.c | 79 +- + 1 file changed, 39 insertions(+), 40 deletions(-) diff --git a/savevm-async.c b/savevm-async.c -index 76cd8fa..af86cbd 100644 +index 76cd8fa..8c76137 100644 --- a/savevm-async.c +++ b/savevm-async.c @@ -20,6 +20,8 @@ @@ -159,7 +159,7 @@ index 76cd8fa..af86cbd 100644 } static const QEMUFileOps loadstate_file_ops = { -@@ -477,25 +477,25 @@ static const QEMUFileOps loadstate_file_ops = { +@@ -477,28 +477,27 @@ static const QEMUFileOps loadstate_file_ops = { int load_state_from_blockdev(const char *filename) { @@ -169,7 +169,8 @@ index 76cd8fa..af86cbd 100644 Error *blocker = NULL; QEMUFile *f; - int ret; +-int ret; ++int ret = -EINVAL; -bs = bdrv_new(); -ret = bdrv_open(, filename, NULL, NULL, 0, _err); @@ -191,8 +192,11 @@ index 76cd8fa..af86cbd 100644 +f = qemu_fopen_ops(be, _file_ops); if (!f) { error_report("Could not open VM state file"); - ret = -EINVAL; -@@ -516,10 +516,10 @@ int load_state_from_blockdev(const char *filename) +-ret = -EINVAL; + goto the_end; + } + +@@ -516,10 +515,10 @@ int load_state_from_blockdev(const char *filename) ret = 0; the_end: -- 2.1.4 ___ pve-devel mailing list pve-devel@pve.proxmox.com http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] applied: [PATCH kvm 2/2] glusterfs: allow partial reads
--- Upstream bug: https://bugs.launchpad.net/qemu/+bug/1644754 This patch was also sent upstream as an RFC patch with an additional description. For now it should be good enough. If partial reads were an actual issue we'd see a lot more qemu-generated I/O errors with glusterfs. This is also more consistent with other block driver backends. .../pve/0047-glusterfs-allow-partial-reads.patch | 78 ++ debian/patches/series | 1 + 2 files changed, 79 insertions(+) create mode 100644 debian/patches/pve/0047-glusterfs-allow-partial-reads.patch diff --git a/debian/patches/pve/0047-glusterfs-allow-partial-reads.patch b/debian/patches/pve/0047-glusterfs-allow-partial-reads.patch new file mode 100644 index 000..12ba4b0 --- /dev/null +++ b/debian/patches/pve/0047-glusterfs-allow-partial-reads.patch @@ -0,0 +1,78 @@ +From e9a50006a7f86adacff211fbd98d5b3ad79f22ef Mon Sep 17 00:00:00 2001 +From: Wolfgang Bumiller+Date: Wed, 30 Nov 2016 10:27:47 +0100 +Subject: [PATCH 47/47] glusterfs: allow partial reads + +This should deal with qemu bug #1644754 until upstream +decides which way to go. The general direction seems to be +away from sector based block APIs and with that in mind, and +when comparing to other network block backends (eg. nfs) +treating partial reads as errors doesn't seem to make much +sense. 
+--- + block/gluster.c | 10 +- + 1 file changed, 9 insertions(+), 1 deletion(-) + +diff --git a/block/gluster.c b/block/gluster.c +index 6dcf926..17c51ed 100644 +--- a/block/gluster.c b/block/gluster.c +@@ -39,6 +39,7 @@ typedef struct GlusterAIOCB { + QEMUBH *bh; + Coroutine *coroutine; + AioContext *aio_context; ++bool is_write; + } GlusterAIOCB; + + typedef struct BDRVGlusterState { +@@ -623,8 +624,10 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg) + acb->ret = 0; /* Success */ + } else if (ret < 0) { + acb->ret = -errno; /* Read/Write failed */ ++} else if (acb->is_write) { ++acb->ret = -EIO; /* Partial write - fail it */ + } else { +-acb->ret = -EIO; /* Partial read/write - fail it */ ++acb->ret = 0; /* Success */ + } + + acb->bh = aio_bh_new(acb->aio_context, qemu_gluster_complete_aio, acb); +@@ -861,6 +864,7 @@ static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs, + acb.ret = 0; + acb.coroutine = qemu_coroutine_self(); + acb.aio_context = bdrv_get_aio_context(bs); ++acb.is_write = true; + + ret = glfs_zerofill_async(s->fd, offset, size, gluster_finish_aiocb, ); + if (ret < 0) { +@@ -979,9 +983,11 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs, + acb.aio_context = bdrv_get_aio_context(bs); + + if (write) { ++acb.is_write = true; + ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0, + gluster_finish_aiocb, ); + } else { ++acb.is_write = false; + ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0, + gluster_finish_aiocb, ); + } +@@ -1044,6 +1050,7 @@ static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs) + acb.ret = 0; + acb.coroutine = qemu_coroutine_self(); + acb.aio_context = bdrv_get_aio_context(bs); ++acb.is_write = true; + + ret = glfs_fsync_async(s->fd, gluster_finish_aiocb, ); + if (ret < 0) { +@@ -1090,6 +1097,7 @@ static coroutine_fn int qemu_gluster_co_pdiscard(BlockDriverState *bs, + acb.ret = 0; + acb.coroutine = 
qemu_coroutine_self(); + acb.aio_context = bdrv_get_aio_context(bs); ++acb.is_write = true; + + ret = glfs_discard_async(s->fd, offset, size, gluster_finish_aiocb, ); + if (ret < 0) { +-- +2.1.4 + diff --git a/debian/patches/series b/debian/patches/series index 4ae72b0..bc87c7a 100644 --- a/debian/patches/series +++ b/debian/patches/series @@ -44,6 +44,7 @@ pve/0043-vma-sizes-passed-to-blk_co_preadv-should-be-bytes-no.patch pve/0044-glusterfs-daemonize.patch pve/0045-qmp_delete_drive_snapshot-add-aiocontext.patch pve/0046-convert-savevm-async-to-threads.patch +pve/0047-glusterfs-allow-partial-reads.patch #see https://bugs.launchpad.net/qemu/+bug/1488363?comments=all extra/x86-lapic-Load-LAPIC-state-at-post_load.patch extra/0001-Revert-target-i386-disable-LINT0-after-reset.patch -- 2.1.4 ___ pve-devel mailing list pve-devel@pve.proxmox.com http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
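The behavioral change in the glusterfs patch is easiest to see as a decision table for the completion callback: before the patch, any partial transfer was failed with -EIO; after it, only partial *writes* fail, while partial reads are reported as success. A hedged Python sketch of that mapping (function name and signature are mine; the real logic is C in `gluster_finish_aiocb()` in block/gluster.c):

```python
import errno

def gluster_aio_result(ret, size, is_write, err=errno.EIO):
    """Map a glfs async completion to the acb->ret value after the patch.

    ret:  byte count returned by the async glfs call (negative on error)
    size: requested transfer size
    err:  errno of a failed call (stand-in for the C code's global errno)
    """
    if ret == size:
        return 0              # full transfer: success
    if ret < 0:
        return -err           # read/write failed
    if is_write:
        return -errno.EIO     # partial write: still fail it
    return 0                  # partial read: allowed after the patch
```

This is also why the patch threads an `is_write` flag through `GlusterAIOCB`: the callback otherwise has no way to know whether a short count came from a read or a write.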
[pve-devel] [PATCH kernel 3/3] bump version to 4.4.30-73
Signed-off-by: Fabian Grünbichler --- changelog.Debian| 6 ++ proxmox-ve/changelog.Debian | 6 ++ 2 files changed, 12 insertions(+) diff --git a/changelog.Debian b/changelog.Debian index 53739fe..04a2f8b 100644 --- a/changelog.Debian +++ b/changelog.Debian @@ -1,3 +1,9 @@ +pve-kernel (4.4.30-73) unstable; urgency=medium + + * update to Ubuntu-4.4.0-51.72 + + -- Proxmox Support Team Wed, 30 Nov 2016 09:43:00 +0100 + pve-kernel (4.4.24-72) unstable; urgency=medium * update to Ubuntu-4.4.0-47.68 diff --git a/proxmox-ve/changelog.Debian b/proxmox-ve/changelog.Debian index f4b6ed7..e421fcf 100644 --- a/proxmox-ve/changelog.Debian +++ b/proxmox-ve/changelog.Debian @@ -1,3 +1,9 @@ +proxmox-ve (4.3-73) unstable; urgency=medium + + * depend on newest 4.4.30-1-pve kernel + + -- Proxmox Support Team Wed, 30 Nov 2016 09:43:29 +0100 + proxmox-ve (4.3-72) unstable; urgency=medium * depend on newest 4.4.24-1-pve kernel -- 2.1.4
[pve-devel] [PATCH kernel 1/3] update to Ubuntu 4.4.0-51.72
Signed-off-by: Fabian Grünbichler --- Makefile | 6 +++--- ubuntu-xenial.tgz | Bin 145699527 -> 145786508 bytes 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Makefile b/Makefile index 1069ab6..4765c6a 100644 --- a/Makefile +++ b/Makefile @@ -1,8 +1,8 @@ RELEASE=4.3 # also update proxmox-ve/changelog if you change KERNEL_VER or KREL -KERNEL_VER=4.4.24 -PKGREL=72 +KERNEL_VER=4.4.30 +PKGREL=73 # also include firmware of previous version into # the fw package: fwlist-2.6.32-PREV-pve KREL=1 @@ -127,7 +127,7 @@ ${VIRTUAL_HDR_DEB} pve-headers: proxmox-ve/pve-headers.control download: rm -rf ${KERNEL_SRC} ${KERNELSRCTAR} #git clone git://kernel.ubuntu.com/ubuntu/ubuntu-vivid.git - git clone --single-branch -b Ubuntu-4.4.0-47.68 git://kernel.ubuntu.com/ubuntu/ubuntu-xenial.git ${KERNEL_SRC} + git clone --single-branch -b Ubuntu-4.4.0-51.72 git://kernel.ubuntu.com/ubuntu/ubuntu-xenial.git ${KERNEL_SRC} tar czf ${KERNELSRCTAR} --exclude .git ${KERNEL_SRC} check_gcc: diff --git a/ubuntu-xenial.tgz b/ubuntu-xenial.tgz index 1d89594..57015cd 100644 Binary files a/ubuntu-xenial.tgz and b/ubuntu-xenial.tgz differ -- 2.1.4
[pve-devel] [PATCH kernel 2/3] drop sd-fix-rw_max.patch applied upstream
Signed-off-by: Fabian Grünbichler--- Makefile| 2 -- sd-fix-rw_max.patch | 60 - 2 files changed, 62 deletions(-) delete mode 100644 sd-fix-rw_max.patch diff --git a/Makefile b/Makefile index 4765c6a..d55e51d 100644 --- a/Makefile +++ b/Makefile @@ -265,8 +265,6 @@ ${KERNEL_SRC}/README ${KERNEL_CFG_ORG}: ${KERNELSRCTAR} cd ${KERNEL_SRC}; patch -p1 < ../watchdog_implement-mei-iamt-driver.patch cd ${KERNEL_SRC}; patch -p1 < ../mei_drop-watchdog-code.patch cd ${KERNEL_SRC}; patch -p1 < ../mei_bus-whitelist-watchdog-client.patch - #sd: Fix rw_max for devices that report an optimal xfer size - cd ${KERNEL_SRC}; patch -p1 < ../sd-fix-rw_max.patch # IPoIB performance regression fix cd ${KERNEL_SRC}; patch -p1 < ../IB-ipoib-move-back-the-IB-LL-address-into-the-hard-header.patch sed -i ${KERNEL_SRC}/Makefile -e 's/^EXTRAVERSION.*$$/EXTRAVERSION=${EXTRAVERSION}/' diff --git a/sd-fix-rw_max.patch b/sd-fix-rw_max.patch deleted file mode 100644 index e0752ee..000 --- a/sd-fix-rw_max.patch +++ /dev/null @@ -1,60 +0,0 @@ -From 6b7e9cde49691e04314342b7dce90c67ad567fcc Mon Sep 17 00:00:00 2001 -From: "Martin K. Petersen" -Date: Thu, 12 May 2016 22:17:34 -0400 -Subject: sd: Fix rw_max for devices that report an optimal xfer size - -For historic reasons, io_opt is in bytes and max_sectors in block layer -sectors. This interface inconsistency is error prone and should be -fixed. But for 4.4--4.7 let's make the unit difference explicit via a -wrapper function. - -Fixes: d0eb20a863ba ("sd: Optimal I/O size is in bytes, not sectors") -Cc: sta...@vger.kernel.org # 4.4+ -Reported-by: Fam Zheng -Reviewed-by: Bart Van Assche -Reviewed-by: Christoph Hellwig -Tested-by: Andrew Patterson -Signed-off-by: Martin K. 
Petersen - drivers/scsi/sd.c | 8 - drivers/scsi/sd.h | 5 + - 2 files changed, 9 insertions(+), 4 deletions(-) - -diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c -index f459dff..60bff78 100644 a/drivers/scsi/sd.c -+++ b/drivers/scsi/sd.c -@@ -2867,10 +2867,10 @@ static int sd_revalidate_disk(struct gendisk *disk) - if (sdkp->opt_xfer_blocks && - sdkp->opt_xfer_blocks <= dev_max && - sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS && -- sdkp->opt_xfer_blocks * sdp->sector_size >= PAGE_CACHE_SIZE) -- rw_max = q->limits.io_opt = -- sdkp->opt_xfer_blocks * sdp->sector_size; -- else -+ logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) { -+ q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks); -+ rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks); -+ } else - rw_max = BLK_DEF_MAX_SECTORS; - - /* Combine with controller limits */ -diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h -index 654630b..765a6f1 100644 a/drivers/scsi/sd.h -+++ b/drivers/scsi/sd.h -@@ -151,6 +151,11 @@ static inline sector_t logical_to_sectors(struct scsi_device *sdev, sector_t blo - return blocks << (ilog2(sdev->sector_size) - 9); - } - -+static inline unsigned int logical_to_bytes(struct scsi_device *sdev, sector_t blocks) -+{ -+ return blocks * sdev->sector_size; -+} -+ - /* - * A DIF-capable target device can be formatted with different - * protection schemes. Currently 0 through 3 are defined: --- -cgit v0.12 - -- 2.1.4 ___ pve-devel mailing list pve-devel@pve.proxmox.com http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] applied: [PATCH kernel 0/3] update to 4.4.30 / 4.4.0-51.72
already applied whole series Fabian Grünbichler (3): update to Ubuntu 4.4.0-51.72 drop sd-fix-rw_max.patch applied upstream bump version to 4.4.30-73 changelog.Debian| 6 + proxmox-ve/changelog.Debian | 6 + Makefile| 8 +++--- sd-fix-rw_max.patch | 60 ubuntu-xenial.tgz | Bin 145699527 -> 145786508 bytes 5 files changed, 15 insertions(+), 65 deletions(-) delete mode 100644 sd-fix-rw_max.patch -- 2.1.4
[pve-devel] applied: [PATCH manager] Add online help for startup/shutdown order and LXC general settings
applied
[pve-devel] applied: [PATCH v3 pve-docs] Add useful examples to create/resize a lvm[-thin] pool.
applied
[pve-devel] [PATCH docs] Add a chapter explaining how online help is added to the GUI
--- README.adoc | 24 1 file changed, 24 insertions(+) diff --git a/README.adoc b/README.adoc index 5e1c36c..18afecc 100644 --- a/README.adoc +++ b/README.adoc @@ -239,6 +239,30 @@ text: For example, block headers can be used to add file names/paths to file content listings. + +Online Help +--- +Each {pve} installation contains the full documentation in HTML format, +which is then used as the target of various help buttons in the GUI. + +If after adding a specific entry in the documentation you want to +create a help button pointing to that, you need to do the +following: + +* add a string id in double square brackets before your +documentation entry, like `[[qm_general_settings]]` +* rebuild the `asciidoc-pve` script and the HTML chapter file containing +your entry +* add a property `onlineHelp` in the ExtJS panel you want to document, +using the above string, like `onlineHelp: qm_general_settings` +This panel has to be a child class of PVE.panel.InputPanel + +On calling `make install` the asciidoc-pve script will populate +a JS object associating the string id and a link to the +local HTML documentation, and the help button of your input panel +will point to this link. + + Screenshots --- -- 2.1.4
[pve-devel] [PATCH manager] Add online help for startup/shutdown order and LXC general settings
--- www/manager6/lxc/CreateWizard.js | 1 + www/manager6/qemu/StartupEdit.js | 1 + 2 files changed, 2 insertions(+) diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizard.js index bd6090d..df2b633 100644 --- a/www/manager6/lxc/CreateWizard.js +++ b/www/manager6/lxc/CreateWizard.js @@ -175,6 +175,7 @@ Ext.define('PVE.lxc.CreateWizard', { { xtype: 'inputpanel', title: gettext('General'), + onlineHelp: 'pct_general', column1: [ { xtype: 'pveNodeSelector', diff --git a/www/manager6/qemu/StartupEdit.js b/www/manager6/qemu/StartupEdit.js index 6c7357d..3c832c5 100644 --- a/www/manager6/qemu/StartupEdit.js +++ b/www/manager6/qemu/StartupEdit.js @@ -1,5 +1,6 @@ Ext.define('PVE.qemu.StartupInputPanel', { extend: 'PVE.panel.InputPanel', +onlineHelp: 'qm_startup_and_shutdown', onGetValues: function(values) { var me = this; -- 2.1.4