[pve-devel] applied: [PATCH guest-common] fix config_with_pending_array for falsy current values

2020-05-20 Thread Thomas Lamprecht
One could have a config with:
> acpi: 0

and a pending deletion for that to restore the default 1 value.

The config_with_pending_array method then pushed the key twice: once
in the loop iterating over the config itself (correctly), and once in
the loop over the pending delete hash, which is normally only for those
options not yet referenced in the config at all. There the check was on
"truthiness", not definedness; fix that.
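
For illustration, a minimal standalone sketch of the pitfall (config and
pending-delete values made up):

#!/usr/bin/perl
# Sketch of the falsy-value pitfall: 'acpi' is set to 0 and has a pending
# deletion, so a truthiness check wrongly treats it as "not in the config".
use strict;
use warnings;

my $conf = { acpi => 0, pending => {} };
my $pending_delete_hash = { acpi => 0 };    # 0 = graceful delete

while (my ($opt, $force) = each %$pending_delete_hash) {
    # old check `next if $conf->{$opt};` did NOT skip here, because 0 is
    # falsy, so 'acpi' would have been pushed a second time
    next if defined($conf->{pending}->{$opt});
    next if defined($conf->{$opt});         # fixed: skips 'acpi' as intended
    print "would add a pending-delete entry for '$opt'\n";
}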

Signed-off-by: Thomas Lamprecht 
---
 PVE/GuestHelpers.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/GuestHelpers.pm b/PVE/GuestHelpers.pm
index 1688c5c..bc53b32 100644
--- a/PVE/GuestHelpers.pm
+++ b/PVE/GuestHelpers.pm
@@ -231,8 +231,8 @@ sub config_with_pending_array {
 }
 
 while (my ($opt, $force) = each %$pending_delete_hash) {
-   next if $conf->{pending}->{$opt}; # just to be sure
-   next if $conf->{$opt};
+   next if defined($conf->{pending}->{$opt});
+   next if defined($conf->{$opt});
my $item = { key => $opt, delete => ($force ? 2 : 1)};
push @$res, $item;
 }
-- 
2.20.1




[pve-devel] [PATCH docs 1/2] zfs-local: fix #2704 add disk replacement steps for grub

2020-05-20 Thread Aaron Lauterer
The documentation only covered replacing the disk when systemd-boot is
used, but not when grub is used.

Signed-off-by: Aaron Lauterer 
---

Once this is applied and rolled out (in the Wiki), we should remove the
old steps in the ZFS tips and tricks page [0] and link to this one.

[0] 
https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks#Replacing_a_failed_disk_in_the_root_pool

 local-zfs.adoc | 22 +-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/local-zfs.adoc b/local-zfs.adoc
index 5cce677..76a1ac2 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -273,12 +273,27 @@ can be used as cache.
 # zpool replace -f   
 
 
-.Changing a failed bootable device when using systemd-boot
+.Changing a failed bootable device
+
+Depending on how {pve} was installed it is either using `grub` or 
`systemd-boot`
+as bootloader (see xref:sysboot[Host Bootloader]).
+
+The first steps of copying the partition table, reissuing GUIDs and replacing
+the ZFS partition are the same. To make the system bootable from the new disk,
+different steps are needed which depend on the bootloader in use.
 
 
 # sgdisk  -R 
 # sgdisk -G 
 # zpool replace -f   
+
+
+NOTE: Use the `zpool status -v` command to monitor how far the resilvering
+process of the new disk has progressed.
+
+With `systemd-boot`:
+
+
 # pve-efiboot-tool format 
 # pve-efiboot-tool init 
 
@@ -287,6 +302,11 @@ NOTE: `ESP` stands for EFI System Partition, which is 
setup as partition #2 on
 bootable disks setup by the {pve} installer since version 5.4. For details, see
 xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced 
ESP].
 
+With `grub`:
+
+
+# grub-install 
+
 
 Activate E-Mail Notification
 
-- 
2.20.1




[pve-devel] [PATCH docs 2/2] sysboot: add a section on how to determine the used bootloader

2020-05-20 Thread Aaron Lauterer
Besides observing the boot process, `efibootmgr` seems to be the safest
bet. It should not show anything if booted in legacy BIOS mode. If
booted in UEFI mode, the boot entries should show which bootloader is
present.

Having both present should not happen in a regular use case AFAICT and
would need more consideration, or better, a reboot.
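
As a rough illustration only (a sketch, not part of the patch; it just checks
the firmware interface the kernel exposes, not the boot entries themselves):

#!/usr/bin/perl
# Heuristic: if /sys/firmware/efi exists, the system was booted in UEFI mode
# and `efibootmgr -v` can show the entries; otherwise it booted in legacy
# BIOS mode, which means grub.
use strict;
use warnings;

if (-d '/sys/firmware/efi') {
    print "UEFI boot - check `efibootmgr -v` for the registered bootloader\n";
} else {
    print "legacy BIOS boot - grub is used\n";
}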

Signed-off-by: Aaron Lauterer 
---
 images/screenshot/boot-grub.png| Bin 0 -> 14810 bytes
 images/screenshot/boot-systemdboot.png | Bin 0 -> 4527 bytes
 system-booting.adoc|  39 +
 3 files changed, 39 insertions(+)
 create mode 100644 images/screenshot/boot-grub.png
 create mode 100644 images/screenshot/boot-systemdboot.png

diff --git a/images/screenshot/boot-grub.png b/images/screenshot/boot-grub.png
new file mode 100644
index 
..6c87bf66301f0ebe77d8513cd6bc40dc4ed20ed4
GIT binary patch
literal 14810
[binary image data truncated by the list archive]

[pve-devel] [PATCH guest-common] add asObject to config_with_pending

2020-05-20 Thread Tim Marx
Signed-off-by: Tim Marx 
---
 PVE/GuestHelpers.pm | 34 --
 1 file changed, 28 insertions(+), 6 deletions(-)
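
For context, a minimal sketch of the two result shapes the new flag toggles
between (option names and values made up):

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

# default array form: one item per option, each carrying its own 'key'
my $as_array = [
    { key => 'memory', value => 512, pending => 1024 },
    { key => 'acpi',   value => 0,   delete  => 1 },
];

# object form (asObject set): keyed by option name, no 'key' inside the items
my $as_object = {
    memory => { value => 512, pending => 1024 },
    acpi   => { value => 0,   delete  => 1 },
};

print Dumper($as_array, $as_object);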

diff --git a/PVE/GuestHelpers.pm b/PVE/GuestHelpers.pm
index 1688c5c..176d903 100644
--- a/PVE/GuestHelpers.pm
+++ b/PVE/GuestHelpers.pm
@@ -205,19 +205,29 @@ sub format_pending {
 # value (the current value in effect - if any)
 # pending (a new, still pending, value - if any)
 # delete (when deletions are pending, this is set to either 2 (force) or 1 
(graceful))
-sub config_with_pending_array {
-my ($conf, $pending_delete_hash) = @_;
+sub config_with_pending {
+my ($conf, $pending_delete_hash, $asObject) = @_;
 
 my $res = [];
 
+if ($asObject) {
+   $res = {};
+}
+
 foreach my $opt (keys %$conf) {
next if ref($conf->{$opt});
my $item = { key => $opt };
+   if ($asObject) {
+   $item = {};
+   }
$item->{value} = $conf->{$opt} if defined($conf->{$opt});
$item->{pending} = $conf->{pending}->{$opt} if 
defined($conf->{pending}->{$opt});
$item->{delete} = ($pending_delete_hash->{$opt}->{force} ? 2 : 1) if 
exists $pending_delete_hash->{$opt};
-
-   push @$res, $item;
+   if ($asObject) {
+   $res->{$opt} = $item;
+   } else {
+   push @$res, $item;
+   }
 }
 
 foreach my $opt (keys %{$conf->{pending}}) {
@@ -225,16 +235,28 @@ sub config_with_pending_array {
next if ref($conf->{pending}->{$opt}); # just to be sure
next if defined($conf->{$opt});
my $item = { key => $opt };
+   if ($asObject) {
+   $item = {};
+   }
$item->{pending} = $conf->{pending}->{$opt};
 
-   push @$res, $item;
+   if ($asObject) {
+   $res->{$opt} = $item;
+   } else {
+   push @$res, $item;
+   }
 }
 
 while (my ($opt, $force) = each %$pending_delete_hash) {
next if $conf->{pending}->{$opt}; # just to be sure
next if $conf->{$opt};
my $item = { key => $opt, delete => ($force ? 2 : 1)};
-   push @$res, $item;
+   if ($asObject) {
+   $res->{$opt} = $item;
+   } else {
+   $item->{key} = $opt;
+   push @$res, $item;
+   }
 }
 
 return $res;
-- 
2.20.1



[pve-devel] [PATCH container] api: add option to get pending config returned as object instead of an array

2020-05-20 Thread Tim Marx
Signed-off-by: Tim Marx 
---
 src/PVE/API2/LXC.pm | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 21899d0..7b91b6f 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1914,10 +1914,17 @@ __PACKAGE__->register_method({
properties => {
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => 
\::LXC::complete_ctid }),
+   object => {
+   description => "In this case the root element is an object 
instead of an array.".
+  "The key property of the items will be extracted 
and used as the root object keys.",
+   optional => 1,
+   default => 0,
+   type => 'boolean',
+   },
},
 },
 returns => {
-   type => "array",
+   type => [ "array", "object"],
items => {
type => "object",
properties => {
@@ -1952,7 +1959,7 @@ __PACKAGE__->register_method({
 
my $pending_delete_hash = 
PVE::LXC::Config->parse_pending_delete($conf->{pending}->{delete});
 
-   return PVE::GuestHelpers::config_with_pending_array($conf, 
$pending_delete_hash);
+   return PVE::GuestHelpers::config_with_pending($conf, 
$pending_delete_hash, $param->{object});
 }});
 
 1;
-- 
2.20.1



[pve-devel] [PATCH qemu-server] api: add option to get pending config returned as object instead of an array

2020-05-20 Thread Tim Marx
Signed-off-by: Tim Marx 
---
 PVE/API2/Qemu.pm | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index fd51bf3..0b31f53 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -940,14 +940,20 @@ __PACKAGE__->register_method({
check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
 },
 parameters => {
-   additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => 
\::QemuServer::complete_vmid }),
+   object => {
+   description => "In this case the root element is an object 
instead of an array.".
+  "The key property of the items will be extracted 
and used as the root object keys.",
+   optional => 1,
+   default => 0,
+   type => 'boolean',
+   },
},
 },
 returns => {
-   type => "array",
+   type => [ "array", "object"],
items => {
type => "object",
properties => {
@@ -985,8 +991,7 @@ __PACKAGE__->register_method({
 
$conf->{cipassword} = '**' if defined($conf->{cipassword});
$conf->{pending}->{cipassword} = '** ' if 
defined($conf->{pending}->{cipassword});
-
-   return PVE::GuestHelpers::config_with_pending_array($conf, 
$pending_delete_hash);
+   return PVE::GuestHelpers::config_with_pending($conf, 
$pending_delete_hash, $param->{object});
}});
 
 # POST/PUT {vmid}/config implementation
-- 
2.20.1



[pve-devel] [PATCH qemu-server 11/11] Keep track of replicated volumes via local_volumes

2020-05-20 Thread Fabian Ebner
by extending filter_local_volumes.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 38 +-
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 49a0e03..e7d16c7 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -9,7 +9,7 @@ use POSIX qw( WNOHANG );
 use Time::HiRes qw( usleep );
 
 use PVE::Cluster;
-use PVE::GuestHelpers qw(safe_string_ne);
+use PVE::GuestHelpers qw(safe_boolean_ne safe_string_ne);
 use PVE::INotify;
 use PVE::RPCEnvironment;
 use PVE::Replication;
@@ -514,8 +514,11 @@ sub scan_local_volumes {
 
my $start_time = time();
my $logfunc = sub { $self->log('info', shift) };
-   $self->{replicated_volumes} = PVE::Replication::run_replication(
+   my $replicated_volumes = PVE::Replication::run_replication(
   'PVE::QemuConfig', $replication_jobcfg, $start_time, 
$start_time, $logfunc);
+   foreach my $volid (keys %{$replicated_volumes}) {
+   $local_volumes->{$volid}->{replicated} = 1;
+   }
}
 
foreach my $volid (keys %$local_volumes) {
@@ -525,7 +528,6 @@ sub scan_local_volumes {
} elsif ($self->{running} && $ref eq 'generated') {
die "can't live migrate VM with local cloudinit disk. use a 
shared storage instead\n";
} else {
-   next if $self->{replicated_volumes}->{$volid};
$local_volumes->{$volid}->{migration_mode} = 'offline';
}
}
@@ -561,13 +563,14 @@ sub config_update_local_disksizes {
 }
 
 sub filter_local_volumes {
-my ($self, $migration_mode) = @_;
+my ($self, $migration_mode, $replicated) = @_;
 
 my $volumes = $self->{local_volumes};
 my @filtered_volids;
 
 foreach my $volid (keys %{$volumes}) {
next if defined($migration_mode) && 
safe_string_ne($volumes->{$volid}->{migration_mode}, $migration_mode);
+   next if defined($replicated) && 
safe_boolean_ne($volumes->{$volid}->{replicated}, $replicated);
push @filtered_volids, $volid;
 }
 
@@ -578,7 +581,7 @@ sub sync_offline_local_volumes {
 my ($self) = @_;
 
 my $local_volumes = $self->{local_volumes};
-my @volids = $self->filter_local_volumes('offline');
+my @volids = $self->filter_local_volumes('offline', 0);
 
 my $storecfg = $self->{storecfg};
 my $opts = $self->{opts};
@@ -614,9 +617,11 @@ sub sync_offline_local_volumes {
 sub cleanup_remotedisks {
 my ($self) = @_;
 
+my $local_volumes = $self->{local_volumes};
+
 foreach my $volid (values %{$self->{volume_map}}) {
# don't clean up replicated disks!
-   next if defined($self->{replicated_volumes}->{$volid});
+   next if $local_volumes->{$volid}->{replicated};
 
my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
 
@@ -745,10 +750,8 @@ sub phase2 {
 my $input = $spice_ticket ? "$spice_ticket\n" : "\n";
 $input .= "nbd_protocol_version: $nbd_protocol_version\n";
 
-my $number_of_online_replicated_volumes = 0;
-foreach my $volid (@online_local_volumes) {
-   next if !$self->{replicated_volumes}->{$volid};
-   $number_of_online_replicated_volumes++;
+my @online_replicated_volumes = $self->filter_local_volumes('online', 1);
+foreach my $volid (@online_replicated_volumes) {
$input .= "replicated_volume: $volid\n";
 }
 
@@ -826,7 +829,7 @@ sub phase2 {
 
 die "unable to detect remote migration address\n" if !$raddr;
 
-if (scalar(keys %$target_replicated_volumes) != 
$number_of_online_replicated_volumes) {
+if (scalar(keys %$target_replicated_volumes) != 
scalar(@online_replicated_volumes)) {
die "number of replicated disks on source and target node do not match 
- target node too old?\n"
 }
 
@@ -1192,8 +1195,10 @@ sub phase3_cleanup {
PVE::QemuConfig->write_config($vmid, $conf);
 }
 
+my @replicated_volumes = $self->filter_local_volumes(undef, 1);
+
 # transfer replication state before move config
-$self->transfer_replication_state() if $self->{replicated_volumes};
+$self->transfer_replication_state() if scalar(@replicated_volumes);
 
 # move config to remote node
 my $conffile = PVE::QemuConfig->config_file($vmid);
@@ -1202,7 +1207,7 @@ sub phase3_cleanup {
 die "Failed to move config to node '$self->{node}' - rename failed: $!\n"
 if !rename($conffile, $newconffile);
 
-$self->switch_replication_job_target() if $self->{replicated_volumes};
+$self->switch_replication_job_target() if scalar(@replicated_volumes);
 
 if ($self->{livemigration}) {
if ($self->{stopnbd}) {
@@ -1285,11 +1290,10 @@ sub phase3_cleanup {
$self->{errors} = 1;
 }
 
-# destroy local copies
-foreach my $volid (keys %{$self->{local_volumes}}) {
-   # keep replicated volumes!
-   next if $self->{replicated_volumes}->{$volid};
+my 

[pve-devel] [PATCH-SERIES qemu-server] Cleanup migration code and improve migration disk cleanup

2020-05-20 Thread Fabian Ebner
This series intends to make the migration code more readable
by simplifying/unifying how we keep track of local volumes
and splitting up sync_disks into multiple subroutines.

This is done by keeping more information in the local_volumes hash
we obtain at the very beginning and re-using it later. Also, a method
to filter by migration/replication type is introduced, making it
possible to get rid of some special-case handling when iterating over
local volumes.
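
Roughly, the per-volume bookkeeping ends up looking like this (a sketch
assembled from the individual patches, volume ID and values made up):

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

# one entry per local volume, filled in scan_local_volumes() and re-used later
my $local_volumes = {
    'local-lvm:vm-100-disk-0' => {
        ref            => 'config',     # or 'storage', 'generated', ...
        size           => 34359738368,  # bytes, from the storage scan
        targetsid      => 'local-lvm',  # mapped target storage
        bwlimit        => 51200,        # KB/s, storage_migrate wants bps
        migration_mode => 'offline',    # or 'online'
        replicated     => 0,            # set once run_replication() reports it
    },
};

# patches 3/10/11 then select volumes by these fields instead of keeping
# separate lists, e.g. $self->filter_local_volumes('offline', 0);
print Dumper($local_volumes);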

Patch #1 fixes a check when updating the re-scanned sizes
(has been sent before already, but is needed for this series).
Patches #2-#5 split up sync_disks into 3 pieces and improve
the before-mentioned rescanning.
Patch #6 for determining bwlimit earlier/less often
Patches #7-#11 improve both removal of local and remote disks
made possible by using the volume filtering.


There's still more that can be done; the first thing that comes to
mind is splitting out the replication from sync_disks and further
simplifying sync_disks; there are still a few loops doing similar
things in there.
Also $self->{local_volumes}, $self->{volume_map} and
$self->{target_drive} still have some overlap and it might
be possible to merge them somehow.

But before thinking too much about those things I wanted
to get some feedback on this and ask if this is the
right direction to go in.


Fabian Ebner (11):
  sync_disks: fix check
  update_disksize: make interface leaner
  Split sync_disks into two functions
  Avoid re-scanning all volumes
  Split out config_update_local_disksizes from scan_local_volumes
  Save targetstorage and bwlimit in local_volumes hash and re-use
information
  Add nbd migrated volumes to volume_map earlier
  cleanup_remotedisks: simplify and also include those migrated with
storage_migrate
  Simplify removal of local volumes and get rid of self->{volumes}
  Use storage_migration for checks instead of online_local_volumes
  Keep track of replicated volumes via local_volumes

 PVE/QemuMigrate.pm  | 289 +---
 PVE/QemuServer.pm   |   6 +-
 PVE/QemuServer/Drive.pm |  11 +-
 3 files changed, 160 insertions(+), 146 deletions(-)

-- 
2.20.1




[pve-devel] [PATCH qemu-server 09/11] Simplify removal of local volumes and get rid of self->{volumes}

2020-05-20 Thread Fabian Ebner
This changes the behavior to remove offline migrated volumes
only after the migration has finished successfully (mostly relevant
for mixed settings, e.g. online migration with unused/vmstate disks).

local_volumes contains both the volumes previously in $self->{volumes}
and the volumes in $self->{online_local_volumes}, and hence is the place
to look for which volumes we need to remove. Of course, replicated
volumes still need to be skipped.

Signed-off-by: Fabian Ebner 
---

Who needs phase3 anyways ;)?

 PVE/QemuMigrate.pm | 45 -
 1 file changed, 12 insertions(+), 33 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 421b9d7..a6f42df 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -292,7 +292,6 @@ sub scan_local_volumes {
 
 # local volumes which have been copied
 # and their old_id => new_id pairs
-$self->{volumes} = [];
 $self->{volume_map} = {};
 $self->{local_volumes} = {};
 
@@ -523,14 +522,10 @@ sub scan_local_volumes {
my $ref = $local_volumes->{$volid}->{ref};
if ($self->{running} && $ref eq 'config') {
push @{$self->{online_local_volumes}}, $volid;
-   } elsif ($ref eq 'generated') {
-   die "can't live migrate VM with local cloudinit disk. use a 
shared storage instead\n" if $self->{running};
-   # skip all generated volumes but queue them for deletion in 
phase3_cleanup
-   push @{$self->{volumes}}, $volid;
-   next;
+   } elsif ($self->{running} && $ref eq 'generated') {
+   die "can't live migrate VM with local cloudinit disk. use a 
shared storage instead\n";
} else {
next if $self->{replicated_volumes}->{$volid};
-   push @{$self->{volumes}}, $volid;
$local_volumes->{$volid}->{migration_mode} = 'offline';
}
}
@@ -1160,18 +1155,7 @@ sub phase2_cleanup {
 sub phase3 {
 my ($self, $vmid) = @_;
 
-my $volids = $self->{volumes};
-return if $self->{phase2errors};
-
-# destroy local copies
-foreach my $volid (@$volids) {
-   eval { PVE::Storage::vdisk_free($self->{storecfg}, $volid); };
-   if (my $err = $@) {
-   $self->log('err', "removing local copy of '$volid' failed - $err");
-   $self->{errors} = 1;
-   last if $err =~ /^interrupted by signal$/;
-   }
-}
+return;
 }
 
 sub phase3_cleanup {
@@ -1303,22 +1287,17 @@ sub phase3_cleanup {
$self->{errors} = 1;
 }
 
-if($self->{storage_migration}) {
-   # destroy local copies
-   my $volids = $self->{online_local_volumes};
-
-   foreach my $volid (@$volids) {
-   # keep replicated volumes!
-   next if $self->{replicated_volumes}->{$volid};
+# destroy local copies
+foreach my $volid (keys %{$self->{local_volumes}}) {
+   # keep replicated volumes!
+   next if $self->{replicated_volumes}->{$volid};
 
-   eval { PVE::Storage::vdisk_free($self->{storecfg}, $volid); };
-   if (my $err = $@) {
-   $self->log('err', "removing local copy of '$volid' failed - 
$err");
-   $self->{errors} = 1;
-   last if $err =~ /^interrupted by signal$/;
-   }
+   eval { PVE::Storage::vdisk_free($self->{storecfg}, $volid); };
+   if (my $err = $@) {
+   $self->log('err', "removing local copy of '$volid' failed - $err");
+   $self->{errors} = 1;
+   last if $err =~ /^interrupted by signal$/;
}
-
 }
 
 # clear migrate lock
-- 
2.20.1




[pve-devel] [PATCH qemu-server 08/11] cleanup_remotedisks: simplify and also include those migrated with storage_migrate

2020-05-20 Thread Fabian Ebner
by using the information from volume_map. Call cleanup_remotedisks in
phase1_cleanup as well, because that's where we end up if sync_disks
fails and some disks might already have been transferred successfully.
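
For reference, volume_map simply maps source volume IDs to the IDs allocated
on the target node (a standalone sketch with made-up IDs):

#!/usr/bin/perl
use strict;
use warnings;

# filled by storage_migrate for offline volumes and, with patch 07, by the
# NBD allocation replies from the remote 'qm start' for online ones
my $volume_map = {
    'local-lvm:vm-100-disk-0' => 'target-lvm:vm-100-disk-0',
    'local-lvm:vm-100-disk-1' => 'target-lvm:vm-100-disk-1',
};

# cleanup_remotedisks() can therefore just free every *target* volume:
for my $target_volid (values %$volume_map) {
    print "would run: pvesm free $target_volid (via \$self->{rem_ssh})\n";
}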

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 19 ++-
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 481e1ba..421b9d7 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -619,16 +619,11 @@ sub sync_offline_local_volumes {
 sub cleanup_remotedisks {
 my ($self) = @_;
 
-foreach my $target_drive (keys %{$self->{target_drive}}) {
-   my $drivestr = $self->{target_drive}->{$target_drive}->{drivestr};
-   next if !defined($drivestr);
-
-   my $drive = PVE::QemuServer::parse_drive($target_drive, $drivestr);
-
+foreach my $volid (values %{$self->{volume_map}}) {
# don't clean up replicated disks!
-   next if defined($self->{replicated_volumes}->{$drive->{file}});
+   next if defined($self->{replicated_volumes}->{$volid});
 
-   my ($storeid, $volname) = PVE::Storage::parse_volume_id($drive->{file});
+   my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
 
my $cmd = [@{$self->{rem_ssh}}, 'pvesm', 'free', "$storeid:$volname"];
 
@@ -683,11 +678,9 @@ sub phase1_cleanup {
$self->log('err', $err);
 }
 
-if ($self->{volumes}) {
-   foreach my $volid (@{$self->{volumes}}) {
-   $self->log('err', "found stale volume copy '$volid' on node 
'$self->{node}'");
-   # fixme: try to remove ?
-   }
+eval { $self->cleanup_remotedisks() };
+if (my $err = $@) {
+   $self->log('err', $err);
 }
 
 eval { $self->cleanup_bitmaps() };
-- 
2.20.1




[pve-devel] [PATCH qemu-server 07/11] Add nbd migrated volumes to volume_map earlier

2020-05-20 Thread Fabian Ebner
This makes sure that they are present in volume_map as soon as
the remote node tells us, that they have been allocated.

Signed-off-by: Fabian Ebner 
---

Makes the cleanup_remotedisks simplification in the next patch possible.

Another idea would be to do it in its own loop, after obtaining the
information from the remote 'qm start' rather than during that.

 PVE/QemuMigrate.pm | 30 ++
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 777ba2e..481e1ba 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -765,6 +765,22 @@ sub phase2 {
}
 }
 
+my $handle_storage_migration_listens = sub {
+   my ($drive_key, $drivestr, $nbd_uri) = @_;
+
+   $self->{stopnbd} = 1;
+   $self->{target_drive}->{$drive_key}->{drivestr} = $drivestr;
+   $self->{target_drive}->{$drive_key}->{nbd_uri} = $nbd_uri;
+
+   my $source_drive = PVE::QemuServer::parse_drive($drive_key, 
$conf->{$drive_key});
+   my $target_drive = PVE::QemuServer::parse_drive($drive_key, $drivestr);
+   my $source_volid = $source_drive->{file};
+   my $target_volid = $target_drive->{file};
+
+   $self->{volume_map}->{$source_volid} = $target_volid;
+   $self->log('info', "volume '$source_volid' is '$target_volid' on the 
target\n");
+};
+
 my $target_replicated_volumes = {};
 
 # Note: We try to keep $spice_ticket secret (do not pass via command line 
parameter)
@@ -796,9 +812,7 @@ sub phase2 {
my $targetdrive = $3;
$targetdrive =~ s/drive-//g;
 
-   $self->{stopnbd} = 1;
-   $self->{target_drive}->{$targetdrive}->{drivestr} = $drivestr;
-   $self->{target_drive}->{$targetdrive}->{nbd_uri} = $nbd_uri;
+   $handle_storage_migration_listens->($targetdrive, $drivestr, 
$nbd_uri);
} elsif ($line =~ m!^storage migration listens on 
nbd:unix:(/run/qemu-server/(\d+)_nbd\.migrate):exportname=(\S+) volume:(\S+)$!) 
{
my $drivestr = $4;
die "Destination UNIX socket's VMID does not match source VMID" if 
$vmid ne $2;
@@ -807,9 +821,7 @@ sub phase2 {
my $targetdrive = $3;
$targetdrive =~ s/drive-//g;
 
-   $self->{stopnbd} = 1;
-   $self->{target_drive}->{$targetdrive}->{drivestr} = $drivestr;
-   $self->{target_drive}->{$targetdrive}->{nbd_uri} = $nbd_uri;
+   $handle_storage_migration_listens->($targetdrive, $drivestr, 
$nbd_uri);
$unix_socket_info->{$nbd_unix_addr} = 1;
} elsif ($line =~ m/^re-using replicated volume: (\S+) - (.*)$/) {
my $drive = $1;
@@ -902,19 +914,13 @@ sub phase2 {
my $nbd_uri = $target->{nbd_uri};
 
my $source_drive = PVE::QemuServer::parse_drive($drive, 
$conf->{$drive});
-   my $target_drive = PVE::QemuServer::parse_drive($drive, 
$target->{drivestr});
-
my $source_volid = $source_drive->{file};
-   my $target_volid = $target_drive->{file};
 
my $bwlimit = $local_volumes->{$source_volid}->{bwlimit};
my $bitmap = $target->{bitmap};
 
$self->log('info', "$drive: start migration to $nbd_uri");
PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, 
undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
-
-   $self->{volume_map}->{$source_volid} = $target_volid;
-   $self->log('info', "volume '$source_volid' is '$target_volid' on 
the target\n");
}
 }
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server 03/11] Split sync_disks into two functions

2020-05-20 Thread Fabian Ebner
by making local_volumes class-accessible. One function is for scanning all
local volumes and one is for actually syncing offline volumes via storage_migrate.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 98 ++
 1 file changed, 64 insertions(+), 34 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 96de0db..e65b28f 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -9,6 +9,7 @@ use POSIX qw( WNOHANG );
 use Time::HiRes qw( usleep );
 
 use PVE::Cluster;
+use PVE::GuestHelpers qw(safe_string_ne);
 use PVE::INotify;
 use PVE::RPCEnvironment;
 use PVE::Replication;
@@ -284,7 +285,7 @@ sub prepare {
 return $running;
 }
 
-sub sync_disks {
+sub scan_local_volumes {
 my ($self, $vmid) = @_;
 
 my $conf = $self->{vmconf};
@@ -293,12 +294,13 @@ sub sync_disks {
 # and their old_id => new_id pairs
 $self->{volumes} = [];
 $self->{volume_map} = {};
+$self->{local_volumes} = {};
 
 my $storecfg = $self->{storecfg};
 eval {
 
# found local volumes and their origin
-   my $local_volumes = {};
+   my $local_volumes = $self->{local_volumes};
my $local_volumes_errors = {};
my $other_errors = [];
my $abort = 0;
@@ -537,11 +539,7 @@ sub sync_disks {
PVE::QemuServer::update_efidisk_size($conf);
}
 
-   $self->log('info', "copying local disk images") if 
scalar(%$local_volumes);
-
foreach my $volid (keys %$local_volumes) {
-   my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
-   my $targetsid = 
PVE::QemuServer::map_storage($self->{opts}->{storagemap}, $sid);
my $ref = $local_volumes->{$volid}->{ref};
if ($self->{running} && $ref eq 'config') {
push @{$self->{online_local_volumes}}, $volid;
@@ -553,34 +551,65 @@ sub sync_disks {
} else {
next if $self->{replicated_volumes}->{$volid};
push @{$self->{volumes}}, $volid;
-   my $opts = $self->{opts};
-   # use 'migrate' limit for transfer to other node
-   my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', 
[$targetsid, $sid], $opts->{bwlimit});
-   # JSONSchema and get_bandwidth_limit use kbps - storage_migrate 
bps
-   $bwlimit = $bwlimit * 1024 if defined($bwlimit);
-
-   my $storage_migrate_opts = {
-   'bwlimit' => $bwlimit,
-   'insecure' => $opts->{migration_type} eq 'insecure',
-   'with_snapshots' => $local_volumes->{$volid}->{snapshots},
-   'allow_rename' => !$local_volumes->{$volid}->{is_vmstate},
-   };
-
-   my $logfunc = sub { $self->log('info', $_[0]); };
-   my $new_volid = eval {
-   PVE::Storage::storage_migrate($storecfg, $volid, 
$self->{ssh_info},
- $targetsid, 
$storage_migrate_opts, $logfunc);
-   };
-   if (my $err = $@) {
-   die "storage migration for '$volid' to storage '$targetsid' 
failed - $err\n";
-   }
-
-   $self->{volume_map}->{$volid} = $new_volid;
-   $self->log('info', "volume '$volid' is '$new_volid' on the 
target\n");
+   $local_volumes->{$volid}->{migration_mode} = 'offline';
}
}
 };
-die "Failed to sync data - $@" if $@;
+die "Problem found while scanning volumes - $@" if $@;
+}
+
+sub filter_local_volumes {
+my ($self, $migration_mode) = @_;
+
+my $volumes = $self->{local_volumes};
+my @filtered_volids;
+
+foreach my $volid (keys %{$volumes}) {
+   next if defined($migration_mode) && 
safe_string_ne($volumes->{$volid}->{migration_mode}, $migration_mode);
+   push @filtered_volids, $volid;
+}
+
+return @filtered_volids;
+}
+
+sub sync_offline_local_volumes {
+my ($self) = @_;
+
+my $local_volumes = $self->{local_volumes};
+my @volids = $self->filter_local_volumes('offline');
+
+my $storecfg = $self->{storecfg};
+my $opts = $self->{opts};
+
+$self->log('info', "copying local disk images") if scalar(@volids);
+
+foreach my $volid (@volids) {
+   my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
+   my $targetsid = 
PVE::QemuServer::map_storage($self->{opts}->{storagemap}, $sid);
+   # use 'migration' limit for transfer to other node
+   my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', 
[$targetsid, $sid], $opts->{bwlimit});
+   # JSONSchema and get_bandwidth_limit use kbps - storage_migrate bps
+   $bwlimit = $bwlimit * 1024 if defined($bwlimit);
+
+   my $storage_migrate_opts = {
+   'bwlimit' => $bwlimit,
+   'insecure' => $opts->{migration_type} eq 'insecure',
+   'with_snapshots' => $local_volumes->{$volid}->{snapshots},
+   

[pve-devel] [PATCH qemu-server 01/11] sync_disks: fix check

2020-05-20 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---

This is a re-send of a previously stand-alone patch.
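
For context: $local_volumes is keyed by volume IDs, not by config keys, so the
old lookup via $key could never match (standalone sketch, volume ID made up):

#!/usr/bin/perl
use strict;
use warnings;

my $local_volumes = { 'local-lvm:vm-100-disk-0' => { size => 34359738368 } };

my ($key, $drive) = ('scsi0', { file => 'local-lvm:vm-100-disk-0' });

# old check - never true, $key is the drive key, not a volume ID:
print "old check matches\n" if defined($local_volumes->{$key});

# fixed check - look the volume up by the ID stored in the drive definition:
my $volid = $drive->{file};
print "fixed check matches\n" if defined($local_volumes->{$volid});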

 PVE/QemuMigrate.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b729940..f6baeda 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -519,7 +519,9 @@ sub sync_disks {
PVE::QemuConfig->foreach_volume($conf, sub {
my ($key, $drive) = @_;
return if $key eq 'efidisk0'; # skip efidisk, will be handled later
-   return if !defined($local_volumes->{$key}); # only update sizes for 
local volumes
+
+   my $volid = $drive->{file};
+   return if !defined($local_volumes->{$volid}); # only update sizes 
for local volumes
 
my ($updated, $old_size, $new_size) = 
PVE::QemuServer::Drive::update_disksize($drive, $volid_hash);
if (defined($updated)) {
-- 
2.20.1




[pve-devel] [PATCH qemu-server 10/11] Use storage_migration for checks instead of online_local_volumes

2020-05-20 Thread Fabian Ebner
Like this we don't need to worry about auto-vivification.
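
A quick standalone illustration of the hazard being avoided (not taken from
the patch):

#!/usr/bin/perl
# Iterating over a dereferenced, not-yet-existing hash slot autovivifies it,
# so a later defined()/truthiness check on that slot gives the wrong answer.
use strict;
use warnings;

my $self = {};

foreach my $volid (@{ $self->{online_local_volumes} }) {
    # the loop body never runs, the list is empty ...
}

# ... but the key now exists as an empty array reference:
print defined($self->{online_local_volumes}) ? "autovivified\n" : "absent\n";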

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a6f42df..49a0e03 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -521,7 +521,7 @@ sub scan_local_volumes {
foreach my $volid (keys %$local_volumes) {
my $ref = $local_volumes->{$volid}->{ref};
if ($self->{running} && $ref eq 'config') {
-   push @{$self->{online_local_volumes}}, $volid;
+   $local_volumes->{$volid}->{migration_mode} = 'online';
} elsif ($self->{running} && $ref eq 'generated') {
die "can't live migrate VM with local cloudinit disk. use a 
shared storage instead\n";
} else {
@@ -690,6 +690,9 @@ sub phase2 {
 
 my $conf = $self->{vmconf};
 my $local_volumes = $self->{local_volumes};
+my @online_local_volumes = $self->filter_local_volumes('online');
+
+$self->{storage_migration} = 1 if scalar(@online_local_volumes);
 
 $self->log('info', "starting VM $vmid on remote node '$self->{node}'");
 
@@ -730,7 +733,7 @@ sub phase2 {
push @$cmd, '--force-cpu', $self->{forcecpu};
 }
 
-if ($self->{online_local_volumes}) {
+if ($self->{storage_migration}) {
push @$cmd, '--targetstorage', ($self->{opts}->{targetstorage} // '1');
 }
 
@@ -743,14 +746,10 @@ sub phase2 {
 $input .= "nbd_protocol_version: $nbd_protocol_version\n";
 
 my $number_of_online_replicated_volumes = 0;
-
-# prevent auto-vivification
-if ($self->{online_local_volumes}) {
-   foreach my $volid (@{$self->{online_local_volumes}}) {
-   next if !$self->{replicated_volumes}->{$volid};
-   $number_of_online_replicated_volumes++;
-   $input .= "replicated_volume: $volid\n";
-   }
+foreach my $volid (@online_local_volumes) {
+   next if !$self->{replicated_volumes}->{$volid};
+   $number_of_online_replicated_volumes++;
+   $input .= "replicated_volume: $volid\n";
 }
 
 my $handle_storage_migration_listens = sub {
@@ -890,13 +889,12 @@ sub phase2 {
 }
 my $start = time();
 
-if (defined($self->{online_local_volumes})) {
-   $self->{storage_migration} = 1;
+if ($self->{storage_migration}) {
$self->{storage_migration_jobs} = {};
$self->log('info', "starting storage migration");
 
die "The number of local disks does not match between the source and 
the destination.\n"
-   if (scalar(keys %{$self->{target_drive}}) != scalar 
@{$self->{online_local_volumes}});
+   if (scalar(keys %{$self->{target_drive}}) != 
scalar(@online_local_volumes));
foreach my $drive (keys %{$self->{target_drive}}){
my $target = $self->{target_drive}->{$drive};
my $nbd_uri = $target->{nbd_uri};
-- 
2.20.1




[pve-devel] [PATCH qemu-server 05/11] Split out config_update_local_disksizes from scan_local_volumes

2020-05-20 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 55 ++
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 3b138c4..152cb25 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -516,28 +516,6 @@ sub scan_local_volumes {
   'PVE::QemuConfig', $replication_jobcfg, $start_time, 
$start_time, $logfunc);
}
 
-   # sizes in config have to be accurate for remote node to correctly
-   # allocate disks
-   PVE::QemuConfig->foreach_volume($conf, sub {
-   my ($key, $drive) = @_;
-   return if $key eq 'efidisk0'; # skip efidisk, will be handled later
-
-   my $volid = $drive->{file};
-   return if !defined($local_volumes->{$volid}); # only update sizes 
for local volumes
-
-   my ($updated, $msg) = 
PVE::QemuServer::Drive::update_disksize($drive, 
$local_volumes->{$volid}->{size});
-   if (defined($updated)) {
-   $conf->{$key} = PVE::QemuServer::print_drive($updated);
-   $self->log('info', "drive '$key': $msg");
-   }
-   });
-
-   # we want to set the efidisk size in the config to the size of the
-   # real OVMF_VARS.fd image, else we can create a too big image, which 
does not work
-   if (defined($conf->{efidisk0})) {
-   PVE::QemuServer::update_efidisk_size($conf);
-   }
-
foreach my $volid (keys %$local_volumes) {
my $ref = $local_volumes->{$volid}->{ref};
if ($self->{running} && $ref eq 'config') {
@@ -557,6 +535,33 @@ sub scan_local_volumes {
 die "Problem found while scanning volumes - $@" if $@;
 }
 
+sub config_update_local_disksizes {
+my ($self) = @_;
+
+my $conf = $self->{vmconf};
+my $local_volumes = $self->{local_volumes};
+
+PVE::QemuConfig->foreach_volume($conf, sub {
+   my ($key, $drive) = @_;
+   return if $key eq 'efidisk0'; # skip efidisk, will be handled later
+
+   my $volid = $drive->{file};
+   return if !defined($local_volumes->{$volid}); # only update sizes for 
local volumes
+
+   my ($updated, $msg) = PVE::QemuServer::Drive::update_disksize($drive, 
$local_volumes->{$volid}->{size});
+   if (defined($updated)) {
+   $conf->{$key} = PVE::QemuServer::print_drive($updated);
+   $self->log('info', "drive '$key': $msg");
+   }
+});
+
+# we want to set the efidisk size in the config to the size of the
+# real OVMF_VARS.fd image, else we can create a too big image, which does 
not work
+if (defined($conf->{efidisk0})) {
+   PVE::QemuServer::update_efidisk_size($conf);
+}
+}
+
 sub filter_local_volumes {
 my ($self, $migration_mode) = @_;
 
@@ -656,9 +661,11 @@ sub phase1 {
 $conf->{lock} = 'migrate';
 PVE::QemuConfig->write_config($vmid, $conf);
 
-# scan_local_volumes fixes disk sizes to match their actual size, write 
changes so
-# target allocates correct volumes
 $self->scan_local_volumes($vmid);
+
+# fix disk sizes to match their actual size and write changes,
+# so that the target allocates the correct volumes
+$self->config_update_local_disksizes();
 PVE::QemuConfig->write_config($vmid, $conf);
 
 $self->sync_offline_local_volumes();
-- 
2.20.1




[pve-devel] [PATCH qemu-server 06/11] Save targetstorage and bwlimit in local_volumes hash and re-use information

2020-05-20 Thread Fabian Ebner
It is enough to call get_bandwidth_limit once for each source storage.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 22 +-
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 152cb25..777ba2e 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -338,11 +338,14 @@ sub scan_local_volumes {
if !$target_scfg->{content}->{images};
}
 
+   my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', 
[$targetsid, $storeid], $self->{opts}->{bwlimit});
PVE::Storage::foreach_volid($dl, sub {
my ($volid, $sid, $volinfo) = @_;
 
$local_volumes->{$volid}->{ref} = 'storage';
$local_volumes->{$volid}->{size} = $volinfo->{size};
+   $local_volumes->{$volid}->{targetsid} = $targetsid;
+   $local_volumes->{$volid}->{bwlimit} = $bwlimit;
 
# If with_snapshots is not set for storage migrate, it tries to 
use
# a raw+size stream, but on-the-fly conversion from qcow2 to 
raw+size
@@ -588,12 +591,9 @@ sub sync_offline_local_volumes {
 $self->log('info', "copying local disk images") if scalar(@volids);
 
 foreach my $volid (@volids) {
-   my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
-   my $targetsid = 
PVE::QemuServer::map_storage($self->{opts}->{storagemap}, $sid);
-   # use 'migration' limit for transfer to other node
-   my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', 
[$targetsid, $sid], $opts->{bwlimit});
-   # JSONSchema and get_bandwidth_limit use kbps - storage_migrate bps
-   $bwlimit = $bwlimit * 1024 if defined($bwlimit);
+   my $targetsid = $local_volumes->{$volid}->{targetsid};
+   my $bwlimit = $local_volumes->{$volid}->{bwlimit};
+   $bwlimit = $bwlimit * 1024 if defined($bwlimit); # storage_migrate uses 
bps
 
my $storage_migrate_opts = {
'bwlimit' => $bwlimit,
@@ -701,6 +701,7 @@ sub phase2 {
 my ($self, $vmid) = @_;
 
 my $conf = $self->{vmconf};
+my $local_volumes = $self->{local_volumes};
 
 $self->log('info', "starting VM $vmid on remote node '$self->{node}'");
 
@@ -889,8 +890,6 @@ sub phase2 {
 }
 my $start = time();
 
-my $opt_bwlimit = $self->{opts}->{bwlimit};
-
 if (defined($self->{online_local_volumes})) {
$self->{storage_migration} = 1;
$self->{storage_migration_jobs} = {};
@@ -908,10 +907,7 @@ sub phase2 {
my $source_volid = $source_drive->{file};
my $target_volid = $target_drive->{file};
 
-   my $source_sid = 
PVE::Storage::Plugin::parse_volume_id($source_volid);
-   my $target_sid = 
PVE::Storage::Plugin::parse_volume_id($target_volid);
-
-   my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', 
[$source_sid, $target_sid], $opt_bwlimit);
+   my $bwlimit = $local_volumes->{$source_volid}->{bwlimit};
my $bitmap = $target->{bitmap};
 
$self->log('info', "$drive: start migration to $nbd_uri");
@@ -938,7 +934,7 @@ sub phase2 {
 
 # migrate speed can be set via bwlimit (datacenter.cfg and API) and via the
 # migrate_speed parameter in qm.conf - take the lower of the two.
-my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', undef, 
$opt_bwlimit) // 0;
+my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', undef, 
$self->{opts}->{bwlimit}) // 0;
 my $migrate_speed = $conf->{migrate_speed} // $bwlimit;
 # migrate_speed is in MB/s, bwlimit in KB/s
 $migrate_speed *= 1024;
-- 
2.20.1




[pve-devel] [PATCH qemu-server 02/11] update_disksize: make interface leaner

2020-05-20 Thread Fabian Ebner
Pass the new size directly, so the function doesn't need to know about
how some hash is organized. And return a message directly, instead
of both size strings. Also drop the wantarray, because both
existing callers use the message anyway.
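
A standalone re-sketch of the leaner interface (not the actual module code;
the real helper formats the sizes with PVE::JSONSchema::format_size):

#!/usr/bin/perl
use strict;
use warnings;

# caller passes the new size, helper returns the updated drive and a message
sub update_disksize {
    my ($drive, $newsize) = @_;

    return undef if !defined($newsize);

    my $oldsize = $drive->{size} // 0;
    return undef if $newsize == $oldsize;

    $drive->{size} = $newsize;
    return ($drive, "size of disk '$drive->{file}' updated from $oldsize to $newsize");
}

my $drive = { file => 'local-lvm:vm-100-disk-0', size => 0 };
my ($updated, $msg) = update_disksize($drive, 34359738368);
print "$msg\n" if defined($updated);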

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm  |  5 +++--
 PVE/QemuServer.pm   |  6 +++---
 PVE/QemuServer/Drive.pm | 11 +--
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index f6baeda..96de0db 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -522,11 +522,12 @@ sub sync_disks {
 
my $volid = $drive->{file};
return if !defined($local_volumes->{$volid}); # only update sizes 
for local volumes
+   return if !defined($volid_hash->{$volid});
 
-   my ($updated, $old_size, $new_size) = 
PVE::QemuServer::Drive::update_disksize($drive, $volid_hash);
+   my ($updated, $msg) = 
PVE::QemuServer::Drive::update_disksize($drive, $volid_hash->{$volid}->{size});
if (defined($updated)) {
$conf->{$key} = PVE::QemuServer::print_drive($updated);
-   $self->log('info', "size of disk '$updated->{file}' ($key) 
updated from $old_size to $new_size\n");
+   $self->log('info', "drive '$key': $msg");
}
});
 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dcf05df..506f6fb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5932,7 +5932,7 @@ sub update_disk_config {
 my ($vmid, $conf, $volid_hash) = @_;
 
 my $changes;
-my $prefix = "VM $vmid:";
+my $prefix = "VM $vmid";
 
 # used and unused disks
 my $referenced = {};
@@ -5960,11 +5960,11 @@ sub update_disk_config {
return if drive_is_cdrom($drive);
return if !$volid_hash->{$volid};
 
-   my ($updated, $old_size, $new_size) = 
PVE::QemuServer::Drive::update_disksize($drive, $volid_hash);
+   my ($updated, $msg) = PVE::QemuServer::Drive::update_disksize($drive, 
$volid_hash->{$volid}->{size});
if (defined($updated)) {
$changes = 1;
$conf->{$opt} = print_drive($updated);
-   print "$prefix size of disk '$volid' ($opt) updated from $old_size 
to $new_size\n";
+   print "$prefix ($opt): $msg\n";
}
 });
 
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index f84333f..91c33f8 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -522,14 +522,11 @@ sub bootdisk_size {
 }
 
 sub update_disksize {
-my ($drive, $volid_hash) = @_;
+my ($drive, $newsize) = @_;
 
-my $volid = $drive->{file};
-return undef if !defined($volid);
-return undef if !defined($volid_hash->{$volid}) || 
!defined($volid_hash->{$volid}->{size});
+return undef if !defined($newsize);
 
 my $oldsize = $drive->{size} // 0;
-my $newsize = $volid_hash->{$volid}->{size};
 
 if ($newsize != $oldsize) {
$drive->{size} = $newsize;
@@ -537,7 +534,9 @@ sub update_disksize {
my $old_fmt = PVE::JSONSchema::format_size($oldsize);
my $new_fmt = PVE::JSONSchema::format_size($newsize);
 
-   return wantarray ? ($drive, $old_fmt, $new_fmt) : $drive;
+   my $msg = "size of disk '$drive->{file}' updated from $old_fmt to 
$new_fmt";
+
+   return ($drive, $msg);
 }
 
 return undef;
-- 
2.20.1




[pve-devel] [PATCH qemu-server 04/11] Avoid re-scanning all volumes

2020-05-20 Thread Fabian Ebner
by using the information obtained in the first scan. This
also makes sure we only scan local storages.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e65b28f..3b138c4 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -342,6 +342,7 @@ sub scan_local_volumes {
my ($volid, $sid, $volinfo) = @_;
 
$local_volumes->{$volid}->{ref} = 'storage';
+   $local_volumes->{$volid}->{size} = $volinfo->{size};
 
# If with_snapshots is not set for storage migrate, it tries to 
use
# a raw+size stream, but on-the-fly conversion from qcow2 to 
raw+size
@@ -516,17 +517,15 @@ sub scan_local_volumes {
}
 
# sizes in config have to be accurate for remote node to correctly
-   # allocate disks, rescan to be sure
-   my $volid_hash = PVE::QemuServer::scan_volids($storecfg, $vmid);
+   # allocate disks
PVE::QemuConfig->foreach_volume($conf, sub {
my ($key, $drive) = @_;
return if $key eq 'efidisk0'; # skip efidisk, will be handled later
 
my $volid = $drive->{file};
return if !defined($local_volumes->{$volid}); # only update sizes 
for local volumes
-   return if !defined($volid_hash->{$volid});
 
-   my ($updated, $msg) = 
PVE::QemuServer::Drive::update_disksize($drive, $volid_hash->{$volid}->{size});
+   my ($updated, $msg) = 
PVE::QemuServer::Drive::update_disksize($drive, 
$local_volumes->{$volid}->{size});
if (defined($updated)) {
$conf->{$key} = PVE::QemuServer::print_drive($updated);
$self->log('info', "drive '$key': $msg");
-- 
2.20.1




Re: [pve-devel] applied-series: Re: [PATCH V2 pve-network 0/6] always use a vnet bridge model

2020-05-20 Thread Alexandre DERUMIER
>>applied series, thanks! There could probably be some improved error checking/warnings,
>>e.g., if a QinQ bridge exists, ideally with an "Info" column in the general SDN status
>>panel where such errors are then also visible (as it's checked on apply).

Yes, I need to improve that.
I'll look at this next week.


- Original Message -
From: "Thomas Lamprecht" 
To: "pve-devel" , "aderumier" 
Sent: Tuesday, 19 May 2020 21:45:15
Subject: applied-series: Re: [pve-devel] [PATCH V2 pve-network 0/6] always use a vnet bridge model

On 5/19/20 6:37 PM, Alexandre Derumier wrote: 
> Hi, 
> 
> this is a rework of vlan && qinq plugin to always use a vnet
> bridge for all plugin. 
> 
> Avoid to tag vmport directly on vlan-aware bridge or ovs. 
> Instead, plug the vm interface on a vnet bridge, and plug the 
> vnet bridge on the ovs|bridge-vlan-aware|... and do the tag here. 
> 
> This will help a lot of handling tag modification on a vnet, as we
> don't defined tap|veth in /etc/network/interfaces, 
> and make same vnet model for each plugins. 
> 
> Qinq plugin was buggy, I have fixed it and added support for 
> ovs+classic linux bridge. 
> I have also added vlan-protocol option back, to handle
> both 802.1ad && 802.1q service vlan. 
> 
> I'll update documentation && gui tomorrow 
> 
> 
> Changelog v2: 
> 
> - keep interfaces sort 
> - fix the vnet name character to 8 
> 
> Alexandre Derumier (6): 
> restrict zone name to 8 characters 
> retrict vnet name to 8 characters 
> vlan: use new model 
> qinq: use new model + ovs/classic bridge support 
> evpn: prefix interfaces 
> vxlan: prefix interfaces 
> 
> PVE/Network/SDN/Controllers/EvpnPlugin.pm | 2 +- 
> PVE/Network/SDN/VnetPlugin.pm | 2 +- 
> PVE/Network/SDN/Zones/EvpnPlugin.pm | 24 ++-- 
> PVE/Network/SDN/Zones/Plugin.pm | 2 +- 
> PVE/Network/SDN/Zones/QinQPlugin.pm | 133 +- 
> PVE/Network/SDN/Zones/VlanPlugin.pm | 90 ++- 
> PVE/Network/SDN/Zones/VxlanPlugin.pm | 6 +- 
> 7 files changed, 183 insertions(+), 76 deletions(-) 
> 



applied series, thanks! There could probably be some improved error checking/warnings,
e.g., if a QinQ bridge exists, ideally with an "Info" column in the general SDN status
panel where such errors are then also visible (as it's checked on apply).
