Re: [pve-devel] [PATCH v2 qemu-server 4/4] add unix socket support for NBD storage migration

2020-03-19 Thread Thomas Lamprecht
On 3/18/20 11:26 AM, Mira Limbeck wrote:
> 
>> Thomas Lamprecht wrote on 18 March 2020 at 10:32:
>>
>>  
>> On 3/18/20 10:11 AM, Fabian Grünbichler wrote:
> @@ -594,10 +597,16 @@ sub phase2 {
>   }
>   
>   my $spice_port;
> +my $tunnel_addr = [];
> +my $sock_addr = [];
> +# version > 0 for unix socket support
> +my $nbd_protocol_version = 1;
> +my $input = "nbd_protocol_version: $nbd_protocol_version\n";
> +$input .= "spice_ticket: $spice_ticket\n" if $spice_ticket;
 I know it's already been applied, but this breaks new->old migration 
 (with SPICE) for no reason, doesn't it? I know we don't always support 
 it, but I don't see why we need the break here..

 I.e. if we just send the spice-ticket w/o a prefix an old node will be 
 perfectly happy and just send us back the TCP migration URI (and with 
 the fallback in place a new node will also be happy), but if we do 
 prefix it the remote will pick up the entire "spice_ticket: xxx" as the 
 ticket.
>>> yes, we could revert that back to the old behaviour and add a FIXME: add 
>>> prefix 'spice_ticket: ' with 7.0?
>>>
>>
>> yeah, please. And nice work catching this Stefan!
> Yes, looked over it again and again. Even tested live migration with spice 
> and virt-viewer, but in this case probably missed the new -> old migration.
> Should have waited till today before sending it.
> 

I'd like to bump qemu-server soon, and this is still missing from the pile of known
issues; can you please take a look at it (tomorrow)? That'd be great :)
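
For reference, a minimal sketch of the proposed revert (names, values and the line
ordering here are assumptions for illustration, not the final phase2 code): keep the
ticket as a bare line, so an old target node, which is assumed to read it unprefixed,
still gets the right value, and only add the new key on top.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $spice_ticket = 'example-ticket';    # placeholder value
    my $nbd_protocol_version = 1;

    # send the ticket unprefixed for now, so old targets keep working
    # FIXME: switch to a prefixed 'spice_ticket: ...' line with 7.0
    my $input = $spice_ticket ? "$spice_ticket\n" : "\n";
    # new targets parse this extra line, old ones are assumed to ignore it
    $input .= "nbd_protocol_version: $nbd_protocol_version\n";

    print $input;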




[pve-devel] applied: [PATCH pve-qemu] vma_writer: Display more error information

2020-03-19 Thread Thomas Lamprecht
On 3/19/20 4:02 PM, Stefan Reiter wrote:
> Realizing that vmastat.errmsg already contains "vma_writer_register_stream" 
> and we're not losing any info here:
> 
> Reviewed-by: Stefan Reiter 
> 

With that: applied, thanks!

> On 19/03/2020 11:47, Dominic Jäger wrote:
>> Also print the reason why the function vma_writer_register_stream failed to
>> help debug errors like in [0].
>>
>> [0] 
>> https://forum.proxmox.com/threads/backup-error-vma_writer_register_stream-drive-scsi0-failed-pve-6-1-7.65925/
>>
>> Signed-off-by: Dominic Jäger 
>> ---
>>   .../0029-PVE-Backup-add-vma-backup-format-code.patch  | 11 ++-
>>   1 file changed, 6 insertions(+), 5 deletions(-) 





[pve-devel] applied-series: [PATCH qemu-server 1/2] Disable memory hotplugging for custom NUMA topologies

2020-03-19 Thread Thomas Lamprecht
On 3/19/20 2:19 PM, Alwin Antreich wrote:
> On Wed, Mar 18, 2020 at 04:18:44PM +0100, Stefan Reiter wrote:
>> This cannot work, since we adjust the 'memory' property of the VM config
>> on hotplugging, but then the user-defined NUMA topology won't match for
>> the next start attempt.
>>
>> Check needs to happen here, since it otherwise fails early with "total
>> memory for NUMA nodes must be equal to vm static memory".
>>
>> With this change the error message reflects what is actually happening
>> and doesn't allow VMs with exactly 1GB of RAM either.
>>
>> Signed-off-by: Stefan Reiter 
>> ---
> Tested-by: Alwin Antreich 

applied series, with your T-b tag, thanks to both!

> 
>>
>> Came up after investigating:
>> https://forum.proxmox.com/threads/task-error-total-memory-for-numa-nodes-must-be-equal-to-vm-static-memory.67251/
>>
>> Spent way too much time 'fixing' it before realizing that it can never work
>> like this anyway...
>>
>>  PVE/QemuServer/Memory.pm | 6 ++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
>> index d500b3b..ae9598b 100644
>> --- a/PVE/QemuServer/Memory.pm
>> +++ b/PVE/QemuServer/Memory.pm
>> @@ -225,6 +225,12 @@ sub config {
>>  if ($hotplug_features->{memory}) {
>>  die "NUMA needs to be enabled for memory hotplug\n" if !$conf->{numa};
>>  die "Total memory is bigger than ${MAX_MEM}MB\n" if $memory > $MAX_MEM;
>> +
>> +for (my $i = 0; $i < $MAX_NUMA; $i++) {
>> +die "cannot enable memory hotplugging with custom NUMA topology\n"
> s/hotplugging/hotplug/ or s/hotplugging/hot plugging/
> The word hotplugging doesn't seem to exist in the dictionaries.

I mean yes, but it was understandable enough (and maybe it was late enough when
applying) to not trigger a followup from me. If you want to send one I can take it
in, though :)

> 
>> +if $conf->{"numa$i"};
>> +}
>> +
>>  my $sockets = 1;
>>  $sockets = $conf->{sockets} if $conf->{sockets};
>>  
>> -- 
>> 2.25.1
>>



Re: [pve-devel] [PATCH v2 storage 1/4] volume_has_feature: be aware that qcow2 does not work for lxc

2020-03-19 Thread Fabian Grünbichler
On March 19, 2020 1:37 pm, Fabian Ebner wrote:
> Relevant for the 'clone' feature, because Plugin.pm's clone_image
> always produces qcow2. Also fixed style for neighboring if/else block.
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> Previous discussion: 
> https://pve.proxmox.com/pipermail/pve-devel/2020-March/042472.html
> 
> Changes from v1:
> * As Fabian G. pointed out, templates are not impossible
>   on directory based storages, but linked cloning (currently)
>   is. So fix the storage backend and get rid of the wrong checks.
> 
> This solution doesn't need an API change. It does need
> PVE::Cluster which is used by list_images already.

I really don't like this - hence my comment on v1. I also did not like 
it for list_images ;) pve-storage should not be concerned with which 
guests support which formats.

if we don't want to extend has_feature with a target_format for 
copy/clone, then I'd rather we work around this in pve-container. it's a 
single call site there anyway.

> 
>  PVE/Storage/Plugin.pm | 14 ++
>  1 file changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index 2232261..8baa410 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -844,13 +844,19 @@ sub volume_has_feature {
>  my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
>   $class->parse_volname($volname);
>  
> +my $vmlist = PVE::Cluster::get_vmlist();
> +my $vminfo = $vmlist->{ids}->{$vmid};
> +
>  my $key = undef;
> -if($snapname){
> -$key = 'snap';
> -}else{
> -$key =  $isBase ? 'base' : 'current';
> +if ($snapname) {
> + $key = 'snap';
> +} else {
> + $key = $isBase ? 'base' : 'current';
>  }
>  
> +# clone_images produces a qcow2 image
> +return 0 if defined($vminfo) && $vminfo->{type} eq 'lxc' && $feature eq 
> 'clone';
> +
>  return 1 if defined($features->{$feature}->{$key}->{$format});
>  
>  return undef;
> -- 
> 2.20.1
> 
> 



Re: [pve-devel] [PATCH v2 qemu-server 3/3] migrate: add live-migration of replicated disks

2020-03-19 Thread Thomas Lamprecht
On 3/17/20 1:46 PM, Fabian Grünbichler wrote:
> On March 17, 2020 12:40 pm, Thomas Lamprecht wrote:
>> On 3/17/20 11:21 AM, Stefan Reiter wrote:
 +$local_volumes->{$opt} = $conf->{${opt}};
>>>
>>> Does $conf->{${opt}} have too many brackets or is this another arcane perl 
>>> syntax I've yet to discover? (iow. why not just $conf->{$opt} ?)
>>
>> It's not that arcane, you surely used it sometimes, for example:
>>
>> * in strings to separate variable from text: "${foo}bar"
>> * to treat a hashrefs element as hash: keys %{$foo->{bar}}
>> * to treat a hashrefs element as array: @{$foo->{bar}}
>>
>> so yes, it isn't useful here and would be nicer to just be $opt, but
>> ${opt} is the same thing :)
> 
> it's a typo (or a remnant of some other thing I had there before, not 
> sure).

that's clear, but the same thing nonetheless ;)
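
To make the above concrete, a tiny stand-alone snippet of the ${...} forms mentioned
here (throwaway example data, unrelated to the patch):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $foo = 'base';
    print "${foo}bar\n";                  # braces separate the name from text

    my $href = { bar => { a => 1, b => 2 } };
    print join(',', sort keys %{$href->{bar}}), "\n";   # element treated as a hash

    my $aref = { bar => [ 'x', 'y' ] };
    print join(',', @{$aref->{bar}}), "\n";             # element treated as an array

    my $opt = 'memory';
    print "same thing: ${opt} eq $opt\n"; # plain ${opt} is identical to $opt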




Re: [pve-devel] [PATCH pve-qemu] vma_writer: Display more error information

2020-03-19 Thread Stefan Reiter
Realizing that vmastat.errmsg already contains 
"vma_writer_register_stream" and we're not losing any info here:


Reviewed-by: Stefan Reiter 

On 19/03/2020 11:47, Dominic Jäger wrote:

Also print the reason why the function vma_writer_register_stream failed to
help debug errors like in [0].

[0] 
https://forum.proxmox.com/threads/backup-error-vma_writer_register_stream-drive-scsi0-failed-pve-6-1-7.65925/

Signed-off-by: Dominic Jäger 
---
  .../0029-PVE-Backup-add-vma-backup-format-code.patch  | 11 ++-
  1 file changed, 6 insertions(+), 5 deletions(-)

diff --git 
a/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch 
b/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
index c7a4275..0861a3f 100644
--- a/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
+++ b/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
@@ -8,9 +8,9 @@ Subject: [PATCH 29/32] PVE-Backup: add vma backup format code
   Makefile.objs |   1 +
   vma-reader.c  | 857 ++
   vma-writer.c  | 771 +
- vma.c | 837 
+ vma.c | 838 
   vma.h | 150 +
- 6 files changed, 2618 insertions(+), 1 deletion(-)
+ 6 files changed, 2619 insertions(+), 1 deletion(-)
   create mode 100644 vma-reader.c
   create mode 100644 vma-writer.c
   create mode 100644 vma.c
@@ -1694,7 +1694,7 @@ new file mode 100644
  index 00..a82752448a
  --- /dev/null
  +++ b/vma.c
-@@ -0,0 +1,837 @@
+@@ -0,0 +1,838 @@
  +/*
  + * VMA: Virtual Machine Archive
  + *
@@ -2330,6 +2330,7 @@ index 00..a82752448a
  +}
  +
  +int devcount = 0;
++VmaStatus vmastat;
  +while (optind < argc) {
  +const char *path = argv[optind++];
  +char *devname = NULL;
@@ -2347,7 +2348,8 @@ index 00..a82752448a
  +int dev_id = vma_writer_register_stream(vmaw, devname, size);
  +if (dev_id <= 0) {
  +unlink(archivename);
-+g_error("vma_writer_register_stream '%s' failed", devname);
++vma_writer_get_status(vmaw, &vmastat);
++g_error("error for device '%s': %s", devname, vmastat.errmsg);
  +}
  +
  +BackupJob *job = g_new0(BackupJob, 1);
@@ -2360,7 +2362,6 @@ index 00..a82752448a
  +qemu_coroutine_enter(co);
  +}
  +
-+VmaStatus vmastat;
  +int percent = 0;
  +int last_percent = -1;
  +





Re: [pve-devel] [PATCH v2 qemu/qemu-server 0/4] live-migration with replicated disks

2020-03-19 Thread Stefan Reiter

Seems to work as advertised :) Thus, for the series:

Tested-by: Stefan Reiter 

On 17/03/2020 08:55, Fabian Grünbichler wrote:

I recently picked up and finished some work-in-progress patches for
adding bitmap support to drive-mirror (it got added to backup block jobs
in 4.0, with plenty of fixes in 4.1 and 4.2) and submitted them
upstream.

IMHO this is in a shape now where we can include it, but I'd also be
fine with hiding it behind an extra flag for now that we later switch to
default/always on.

qemu:

Fabian Grünbichler (1):
   add bitmap drive-mirror patches

  ...d-support-for-sync-bitmap-mode-never.patch |  443 +++
  ...-support-for-conditional-and-always-.patch |   83 +
  ...check-for-bitmap-mode-without-bitmap.patch |   33 +
  ...-to-bdrv_dirty_bitmap_merge_internal.patch |   45 +
  ...8-iotests-add-test-for-bitmap-mirror.patch | 3447 +
  .../0039-mirror-move-some-checks-to-qmp.patch |  275 ++
  debian/patches/series |6 +
  7 files changed, 4332 insertions(+)
  create mode 100644 
debian/patches/pve/0034-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
  create mode 100644 
debian/patches/pve/0035-drive-mirror-add-support-for-conditional-and-always-.patch
  create mode 100644 
debian/patches/pve/0036-mirror-add-check-for-bitmap-mode-without-bitmap.patch
  create mode 100644 
debian/patches/pve/0037-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
  create mode 100644 
debian/patches/pve/0038-iotests-add-test-for-bitmap-mirror.patch
  create mode 100644 
debian/patches/pve/0039-mirror-move-some-checks-to-qmp.patch

qemu-server:

Fabian Grünbichler (3):
   drive-mirror: add support for incremental sync
   migrate: add replication info to disk overview
   migrate: add live-migration of replicated disks

  PVE/QemuMigrate.pm | 63 +-
  PVE/QemuServer.pm  | 15 ++-
  2 files changed, 71 insertions(+), 7 deletions(-)





Re: [pve-devel] [PATCH v2 manager 2/2] gui/cluster: add structured peerLinks to join info

2020-03-19 Thread Dominik Csapak

some nits inline

On 3/17/20 12:11 PM, Stefan Reiter wrote:

Instead of the old 'ring_addr' property (which is kept for
compatibility), we also encode the link numbers into the new peerLinks
structure. This allows us to display which IP is assigned to which link
on the cluster in the join dialog, helping a user identify which link
should receive which interface on the new node.

Signed-off-by: Stefan Reiter 
---
  www/manager6/dc/Cluster.js  | 13 +
  www/manager6/dc/ClusterEdit.js  | 10 --
  www/manager6/dc/CorosyncLinkEdit.js | 18 --
  3 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index 963da098..2407ab15 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -85,14 +85,18 @@ Ext.define('PVE.ClusterAdministration', {
return el.name === data.preferred_node;
});
  
-		var links = [];

-   PVE.Utils.forEachCorosyncLink(nodeinfo,
-   (num, link) => links.push(link));
+   var links = {};
+   var ring_addr = [];
+   PVE.Utils.forEachCorosyncLink(nodeinfo, (num, link) => {
+   links[num] = link;
+   ring_addr.push(link);
+   });
  
  		vm.set('preferred_node', {

name: data.preferred_node,
addr: nodeinfo.pve_addr,
-   ring_addr: links,
+   peerLinks: links,
+   ring_addr: ring_addr,
fp: nodeinfo.pve_fp
});
},
@@ -116,6 +120,7 @@ Ext.define('PVE.ClusterAdministration', {
joinInfo: {
ipAddress: vm.get('preferred_node.addr'),
fingerprint: vm.get('preferred_node.fp'),
+   peerLinks: vm.get('preferred_node.peerLinks'),
ring_addr: vm.get('preferred_node.ring_addr'),
totem: vm.get('totem')
}
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index 0820e4a0..4dd977c6 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -222,10 +222,16 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
var links = [];
var interfaces = joinInfo.totem['interface'];
Ext.Array.each(Object.keys(interfaces), iface => {
+   var linkNumber = interfaces[iface]['linknumber'];
+   var peerLink;
+   if (joinInfo.peerLinks) {
+   peerLink = joinInfo.peerLinks[linkNumber];


i did not mention this in the last patch, but here it's very obvious,
please use either style for object access:

foo['bar'] or foo.bar

i would tend to the latter, since we use it more often already,
but i am ok with both as long as it's consistent

especially for things like
interfaces[iface]['linknumber'], it is not immediately obvious what
is a variable and what is 'static' (especially so
since you use the result in a variable named linkNumber
and use that again for property access :S)


+   }
+
links.push({
-   number: interfaces[iface]['linknumber'],
+   number: linkNumber,
value: '',
-   text: '',
+   text: peerLink ? Ext.String.format(gettext("peer's link 
address: {0}"), peerLink) : '',
allowBlank: false
});
});
diff --git a/www/manager6/dc/CorosyncLinkEdit.js 
b/www/manager6/dc/CorosyncLinkEdit.js
index 6fb35a0e..c1492453 100644
--- a/www/manager6/dc/CorosyncLinkEdit.js
+++ b/www/manager6/dc/CorosyncLinkEdit.js
@@ -14,7 +14,7 @@ Ext.define('PVE.form.CorosyncLinkEditorController', {
this.addLink();
  },
  
-addLink: function(number, value, allowBlank) {

+addLink: function(number, value, allowBlank, text) {
let me = this;
let view = me.getView();
let vm = view.getViewModel();
@@ -37,6 +37,7 @@ Ext.define('PVE.form.CorosyncLinkEditorController', {
allowBlankNetwork: allowBlank,
initNumber: number,
initNetwork: value,
+   text: text,
  
  	// needs to be set here, because we need to update the viewmodel

removeBtnHandler: function() {
@@ -130,6 +131,7 @@ Ext.define('PVE.form.CorosyncLinkSelector', {
  // values
  initNumber: 0,
  initNetwork: '',
+text: '',


weird indentation

  
  layout: 'hbox',

  bodyPadding: 5,
@@ -190,6 +192,17 @@ Ext.define('PVE.form.CorosyncLinkSelector', {
parent.removeBtnHandler();
}
}
+   },
+   {
+   xtype: 'label',
+   margi

Re: [pve-devel] [PATCH v2 manager 1/2] gui/cluster: add CorosyncLinkEdit component to support up to 8 links

2020-03-19 Thread Dominik Csapak

overall looks good and it mostly works, comments inline

the only 'real' issue i found is with ipv6 networks
(they do not get auto added to the links)

On 3/17/20 12:11 PM, Stefan Reiter wrote:

CorosyncLinkEdit is a Panel that contains between one and 8
CorosyncLinkSelectors. These can be added or removed with according
buttons.

Values submitted to the API are calculated by each
ProxmoxNetworkSelector itself. This works because ExtJS searches
recursively through all child components for ones with a value to be
submitted, i.e. the CorosyncLinkEdit and CorosyncLinkSelector components
are not part of data submission at all.

Change ClusterEdit.js to use the new component for cluster join and
create.

Signed-off-by: Stefan Reiter 
---
  www/manager6/Makefile   |   1 +
  www/manager6/dc/ClusterEdit.js  | 148 +-
  www/manager6/dc/CorosyncLinkEdit.js | 406 
  3 files changed, 478 insertions(+), 77 deletions(-)
  create mode 100644 www/manager6/dc/CorosyncLinkEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 41615430..0f2224af 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -224,6 +224,7 @@ JSSRC=  
\
dc/Cluster.js   \
dc/ClusterEdit.js   \
dc/PermissionView.js\
+   dc/CorosyncLinkEdit.js  \
Workspace.js
  
  lint: ${JSSRC}

diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index e0fa5d9d..0820e4a0 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -25,24 +25,20 @@ Ext.define('PVE.ClusterCreateWindow', {
name: 'clustername'
},
{
-   xtype: 'proxmoxNetworkSelector',
-   fieldLabel: Ext.String.format(gettext('Link {0}'), 0),
-   emptyText: gettext("Optional, defaults to IP resolved by node's 
hostname"),
-   name: 'link0',
-   autoSelect: false,
-   valueField: 'address',
-   displayField: 'address',
-   skipEmptyText: true
-   }],
-   advancedItems: [{
-   xtype: 'proxmoxNetworkSelector',
-   fieldLabel: Ext.String.format(gettext('Link {0}'), 1),
-   emptyText: gettext("Optional second link for redundancy"),
-   name: 'link1',
-   autoSelect: false,
-   valueField: 'address',
-   displayField: 'address',
-   skipEmptyText: true
+   xtype: 'fieldcontainer',
+   fieldLabel: gettext("Cluster Links"),
+   style: 'padding-top: 5px',


i would rather write this either as:
style: {
 'padding-top':'5px',
},

(makes it easier to add styles later)

or:

padding: '5 0 0 0',
(we use it more often)


+   items: [
+   {
+   xtype: 'pveCorosyncLinkEditor',
+   style: 'padding-bottom: 5px',


same here


+   name: 'links'
+   },
+   {
+   xtype: 'label',
+   text: gettext("Multiple links are used as failover, lower 
numbers have higher priority.")
+   }
+   ]
}]
  }
  });
@@ -149,20 +145,10 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
info: {
fp: '',
ip: '',
-   clusterName: '',
-   ring0Needed: false,
-   ring1Possible: false,
-   ring1Needed: false
+   clusterName: ''
}
},
formulas: {
-   ring0EmptyText: function(get) {
-   if (get('info.ring0Needed')) {
-   return gettext("Cannot use default address safely");
-   } else {
-   return gettext("Default: IP resolved by node's hostname");
-   }
-   },
submittxt: function(get) {
let cn = get('info.clusterName');
if (cn) {
@@ -188,9 +174,6 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
change: 'recomputeSerializedInfo',
enable: 'resetField'
},
-   'proxmoxtextfield[name=ring1_addr]': {
-   enable: 'ring1Needed'
-   },
'textfield': {
disable: 'resetField'
}
@@ -198,47 +181,70 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
resetField: function(field) {
field.reset();
},
-   ring1Needed: function(f) {
-   var vm = this.getViewModel();
-   f.allowBlank = !vm.get('info.ring1Needed');
-   },
onInputTypeChange: function(field, assistedInput) {
-   var vm = this.getViewModel();
+   var view = this.getView();
+   var linkEditor = view.down('#linkEditor');


since we have a controller, we can make use of
references and 'lookup' here

e.g. set a reference on an item:


Re: [pve-devel] [PATCH qemu-server 2/2] Die on misaligned memory for hotplugging

2020-03-19 Thread Alwin Antreich
On Wed, Mar 18, 2020 at 04:18:45PM +0100, Stefan Reiter wrote:
> ...instead of booting with an invalid config once and then silently
> changing the memory size for consequent VM starts.
> 
> Signed-off-by: Stefan Reiter 
> ---
Tested-by: Alwin Antreich 

> 
> This confused me for a bit, I don't think that's very nice behaviour as it
> stands.
> 
>  PVE/QemuServer/Memory.pm | 7 ++-
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index ae9598b..b7cf5d5 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -321,11 +321,8 @@ sub config {
>   push @$cmd, "-object" , $mem_object;
>   push @$cmd, "-device", 
> "pc-dimm,id=$name,memdev=mem-$name,node=$numanode";
>  
> - #if dimm_memory is not aligned to dimm map
> - if($current_size > $memory) {
> -  $conf->{memory} = $current_size;
> -  PVE::QemuConfig->write_config($vmid, $conf);
> - }
> + die "memory size ($memory) must be aligned to $dimm_size for 
> hotplugging\n"
same nit as in my mail to path 1/2

> + if $current_size > $memory;
>   });
>  }
>  }
> -- 
> 2.25.1
> 



Re: [pve-devel] [PATCH qemu-server 1/2] Disable memory hotplugging for custom NUMA topologies

2020-03-19 Thread Alwin Antreich
On Wed, Mar 18, 2020 at 04:18:44PM +0100, Stefan Reiter wrote:
> This cannot work, since we adjust the 'memory' property of the VM config
> on hotplugging, but then the user-defined NUMA topology won't match for
> the next start attempt.
> 
> Check needs to happen here, since it otherwise fails early with "total
> memory for NUMA nodes must be equal to vm static memory".
> 
> With this change the error message reflects what is actually happening
> and doesn't allow VMs with exactly 1GB of RAM either.
> 
> Signed-off-by: Stefan Reiter 
> ---
Tested-by: Alwin Antreich 

> 
> Came up after investigating:
> https://forum.proxmox.com/threads/task-error-total-memory-for-numa-nodes-must-be-equal-to-vm-static-memory.67251/
> 
> Spent way too much time 'fixing' it before realizing that it can never work
> like this anyway...
> 
>  PVE/QemuServer/Memory.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index d500b3b..ae9598b 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -225,6 +225,12 @@ sub config {
>  if ($hotplug_features->{memory}) {
>   die "NUMA needs to be enabled for memory hotplug\n" if !$conf->{numa};
>   die "Total memory is bigger than ${MAX_MEM}MB\n" if $memory > $MAX_MEM;
> +
> + for (my $i = 0; $i < $MAX_NUMA; $i++) {
> + die "cannot enable memory hotplugging with custom NUMA topology\n"
s/hotplugging/hotplug/ or s/hotplugging/hot plugging/
The word hotplugging doesn't seem to exist in the dictionaries.

> + if $conf->{"numa$i"};
> + }
> +
>   my $sockets = 1;
>   $sockets = $conf->{sockets} if $conf->{sockets};
>  
> -- 
> 2.25.1
> 



Re: [pve-devel] applied: [zsync] fix: check for incremental sync snapshot.

2020-03-19 Thread Wolfgang Link


> On March 19, 2020 1:05 PM Thomas Lamprecht  wrote:
> 
>  
> On 3/19/20 11:51 AM, Fabian Ebner wrote:
> > Hi,
> > this does fix an issue when the receiving side has the most recent 
> > snapshot, but not the 'old_snap' one. And of course testing for 'last_snap' 
> > is correct, since that one is the relevant one for the incremental sync. 
> > With that:
> > 
> > Reviewed-By: Fabian Ebner 
> > Tested-By: Fabian Ebner 
> 
> With that applied, thanks to all.
> Wolfgang you missed your signed-off-by (FYI).

Will do the next time.

> > 
> > Some ideas for further improvements:
> > * If on the destination there are older snapshots but not most recent one, 
> > pve-zsync tries to do a full sync and fails with: "destination has 
> > snapshots". One could get the list of snapshots from the destination, the 
> > list of snapshots from the source and use the most recent common one as the 
> > starting point for an incremental sync (and fail if both sides do have 
> > snapshots but no match).
> > * Knowing the list of snapshots for the destination could also be used to 
> > prevent issuing a useless remote 'zfs destroy' when the snapshot to be 
> > deleted does not exist for the destination.
> 
> Sounds reasonable, IMO.

I've already thought about it, but I wouldn't do it automatically.
More like this:
pve-zsync cleanup: removes all rep_ snapshots made on behalf of the source host
that are not available on the target.
pve-zsync fixSync: searches for the last common synchronization point and
removes all subsequent snapshots at the destination.

One problem with automatic resolution is that multiple replications can 
accidentally delete snapshots.
> 
> 



[pve-devel] [PATCH v2 container 3/4] fix #1904: convert to base image when moving a volume of a template

2020-03-19 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 src/PVE/API2/LXC.pm | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 205743c..c62955a 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1818,7 +1818,13 @@ __PACKAGE__->register_method({
my $source_storage = 
PVE::Storage::parse_volume_id($old_volid);
my $movelimit = PVE::Storage::get_bandwidth_limit('move', 
[$source_storage, $storage], $bwlimit);
$new_volid = PVE::LXC::copy_volume($mpdata, $vmid, 
$storage, $storage_cfg, $conf, undef, $movelimit);
-   $mpdata->{volume} = $new_volid;
+   if (PVE::LXC::Config->is_template($conf)) {
+   PVE::Storage::activate_volumes($storage_cfg, [ 
$new_volid ]);
+   my $template_volid = 
PVE::Storage::vdisk_create_base($storage_cfg, $new_volid);
+   $mpdata->{volume} = $template_volid;
+   } else {
+   $mpdata->{volume} = $new_volid;
+   }
 
PVE::LXC::Config->lock_config($vmid, sub {
my $digest = $conf->{digest};
-- 
2.20.1




[pve-devel] [PATCH v2 container 2/4] Rely on template_create to check whether creating a template is possible

2020-03-19 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 src/PVE/API2/LXC.pm | 29 ++---
 1 file changed, 2 insertions(+), 27 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index a5aa5fc..205743c 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -440,13 +440,8 @@ __PACKAGE__->register_method({
# If the template flag was set, we try to convert again to 
template after restore
if ($was_template) {
print STDERR "Convert restored container to template...\n";
-   if (my $err = check_storage_supports_templates($conf)) {
-   warn $err;
-   warn "Leave restored backup as container instead of 
converting to template.\n"
-   } else {
-   PVE::LXC::template_create($vmid, $conf);
-   $conf->{template} = 1;
-   }
+   PVE::LXC::template_create($vmid, $conf);
+   $conf->{template} = 1;
}
PVE::LXC::Config->write_config($vmid, $conf);
};
@@ -468,22 +463,6 @@ __PACKAGE__->register_method({
return $rpcenv->fork_worker($workername, $vmid, $authuser, $realcmd);
 }});
 
-sub check_storage_supports_templates {
-my ($conf) = @_;
-
-my $scfg = PVE::Storage::config();
-eval {
-   PVE::LXC::Config->foreach_mountpoint($conf, sub {
-   my ($ms, $mp) = @_;
-
-   my ($sid) = PVE::Storage::parse_volume_id($mp->{volume}, 0);
-   die "Warning: Directory storage '$sid' does not support container 
templates!\n"
-   if $scfg->{ids}->{$sid}->{path};
-   });
-};
-return $@
-}
-
 __PACKAGE__->register_method({
 name => 'vmdiridx',
 path => '{vmid}',
@@ -1238,10 +1217,6 @@ __PACKAGE__->register_method({
die "you can't convert a CT to template if the CT is running\n"
if PVE::LXC::check_running($vmid);
 
-   if (my $err = check_storage_supports_templates($conf)) {
-   die $err;
-   }
-
my $realcmd = sub {
PVE::LXC::template_create($vmid, $conf);
 
-- 
2.20.1




[pve-devel] [PATCH v2 storage 1/4] volume_has_feature: be aware that qcow2 does not work for lxc

2020-03-19 Thread Fabian Ebner
Relevant for the 'clone' feature, because Plugin.pm's clone_image
always produces qcow2. Also fixed style for neighboring if/else block.

Signed-off-by: Fabian Ebner 
---

Previous discussion: 
https://pve.proxmox.com/pipermail/pve-devel/2020-March/042472.html

Changes from v1:
* As Fabian G. pointed out, templates are not impossible
  on directory based storages, but linked cloning (currently)
  is. So fix the storage backend and get rid of the wrong checks.

This solution doesn't need an API change. It does need
PVE::Cluster which is used by list_images already.

 PVE/Storage/Plugin.pm | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 2232261..8baa410 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -844,13 +844,19 @@ sub volume_has_feature {
 my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
$class->parse_volname($volname);
 
+my $vmlist = PVE::Cluster::get_vmlist();
+my $vminfo = $vmlist->{ids}->{$vmid};
+
 my $key = undef;
-if($snapname){
-$key = 'snap';
-}else{
-$key =  $isBase ? 'base' : 'current';
+if ($snapname) {
+   $key = 'snap';
+} else {
+   $key = $isBase ? 'base' : 'current';
 }
 
+# clone_images produces a qcow2 image
+return 0 if defined($vminfo) && $vminfo->{type} eq 'lxc' && $feature eq 
'clone';
+
 return 1 if defined($features->{$feature}->{$key}->{$format});
 
 return undef;
-- 
2.20.1




[pve-devel] [PATCH v2 container 4/4] move_volume: if deleting old volume fails, add it as unused

2020-03-19 Thread Fabian Ebner
Like this it's clearer that the volume is still there. The warning
is easily missed.

Signed-off-by: Fabian Ebner 
---
 src/PVE/API2/LXC.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index c62955a..fbafd8c 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1858,7 +1858,14 @@ __PACKAGE__->register_method({
PVE::Storage::deactivate_volumes($storage_cfg, [ 
$old_volid ]);
PVE::Storage::vdisk_free($storage_cfg, $old_volid);
};
-   warn $@ if $@;
+   if (my $err = $@) {
+   warn $err;
+   PVE::LXC::Config->lock_config($vmid, sub {
+   my $conf = PVE::LXC::Config->load_config($vmid);
+   PVE::LXC::Config->add_unused_volume($conf, 
$old_volid);
+   PVE::LXC::Config->write_config($vmid, $conf);
+   });
+   }
}
};
my $err = $@;
-- 
2.20.1



[pve-devel] applied: [zsync] fix: check for incremental sync snapshot.

2020-03-19 Thread Thomas Lamprecht
On 3/19/20 11:51 AM, Fabian Ebner wrote:
> Hi,
> this does fix an issue when the receiving side has the most recent snapshot, 
> but not the 'old_snap' one. And of course testing for 'last_snap' is correct, 
> since that one is the relevant one for the incremental sync. With that:
> 
> Reviewed-By: Fabian Ebner 
> Tested-By: Fabian Ebner 

With that applied, thanks to all.
Wolfgang you missed your signed-off-by (FYI).

> 
> Some ideas for further improvements:
> * If on the destination there are older snapshots but not most recent one, 
> pve-zsync tries to do a full sync and fails with: "destination has 
> snapshots". One could get the list of snapshots from the destination, the 
> list of snapshots from the source and use the most recent common one as the 
> starting point for an incremental sync (and fail if both sides do have 
> snapshots but no match).
> * Knowing the list of snapshots for the destination could also be used to 
> prevent issuing a useless remote 'zfs destroy' when the snapshot to be 
> deleted does not exist for the destination.

Sounds reasonable, IMO.




Re: [pve-devel] [PATCH container 1/2] fix #1904: convert to base image when moving a volume of a template

2020-03-19 Thread Fabian Grünbichler
On March 19, 2020 10:43 am, Fabian Ebner wrote:
> On 19.03.20 09:01, Fabian Grünbichler wrote:
>> On March 18, 2020 2:02 pm, Fabian Ebner wrote:
>>> Signed-off-by: Fabian Ebner 
>>> ---
>>>
>>> For VMs this already happens.
>>>
>>> When adding volumes to templates no such conversion to
>>> base images happens yet (affects both VM/LXC). Because
>>> templates are more-or-less supposed to be read-only it
>>> probably makes sense to disallow adding volumes altogether.
>>> Or should we bother and do conversion instead?
>>>
>>> When a volume with linked clones is moved with 'delete source'
>>> then the volume will be copied over and the source volume does
>>> not get deleted, but the source volume disappears from the config.
>>> With the second patch it gets added as an unused volume instead.
>>>
>>>   src/PVE/API2/LXC.pm | 23 +++
>>>   1 file changed, 19 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
>>> index a5aa5fc..9eb52dc 100644
>>> --- a/src/PVE/API2/LXC.pm
>>> +++ b/src/PVE/API2/LXC.pm
>>> @@ -468,6 +468,13 @@ __PACKAGE__->register_method({
>>> return $rpcenv->fork_worker($workername, $vmid, $authuser, $realcmd);
>>>   }});
>>>   
>>> +sub assert_storeid_supports_templates {
>>> +my ($scfg, $sid) = @_;
>>> +
>>> +die "Warning: Directory storage '$sid' does not support container 
>>> templates!\n"
>>> +   if $scfg->{ids}->{$sid}->{path};
>>> +}
>>> +
>> 
>> so this lead me down quite a rabbit hole back to
>> 
>> https://bugzilla.proxmox.com/show_bug.cgi?id=1778
>> 
>> I think the original fix that lead to this "dir storages don't support
>> CT templates" was wrong - what we should do is error out early on
>> attempts to do a linked clone of a volume on a directory storage (as
>> that would create a qcow2 image, which containers don't support).
>> 
>> having a template on directory storage is perfectly fine - you just can
>> only do full clones, since .raw does not support anything else.
>> 
>> could you take a look at how involved cleaning this up would be?
>> 
> 
> I think the problem is that volume_has_feature cannot distinguish 
> between a raw volume of a VM and a raw volume of a container. And the 
> clone feature is not actually present in the latter case.
> 
> I see two approaches:
> One is to extend volume_has_feature with $vmtype (would be a 
> non-breaking API change).

if we do it there, I'd rather see it with $format (we don't want 
guest-specifics inside pve-storage if avoidable), and in this case we 
actually want to ask 'can we create a linked clone with format raw or 
subvol', not 'can we create a linked clone as container'.

> The other is doing an additional check in LXC's clone_vm and not only 
> rely on volume_has_feature.

this would be a valid workaround as well. first check whether cloning is 
possible in theory via has_feature, and then check if the storage is a 
dir-based storage and die accordingly.

the proper solution has the advantage that if we ever implement raw 
linked clone support on a dir based storage (however that would look 
like :-P), we'd just need to do it in pve-storage and pve-container 
would magically work. it might also be nice to have this API extension 
for other stuff/features, but from the top of my head I don't have any 
concrete example. if you can think of something, that would probably 
push the needle towards solution 1 ;)
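
A rough sketch of that workaround variant (approach 2) at the pve-container call
site; the sub name and variables are placeholders, not the actual clone_vm code:

    use strict;
    use warnings;
    use PVE::Storage;

    sub assert_ct_volume_clonable {
        my ($storecfg, $volid, $snapname) = @_;

        # first ask pve-storage whether cloning is possible in theory
        die "storage does not support linked clones for '$volid'\n"
            if !PVE::Storage::volume_has_feature($storecfg, 'clone', $volid, $snapname);

        # then refuse dir-based storages explicitly: a linked clone there would
        # be a qcow2 image, which containers cannot use
        my ($storeid) = PVE::Storage::parse_volume_id($volid);
        my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
        die "linked clone not possible on directory storage '$storeid'\n"
            if $scfg->{path};
    }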

> 
>>>   sub check_storage_supports_templates {
>>>   my ($conf) = @_;
>>>   
>>> @@ -477,8 +484,7 @@ sub check_storage_supports_templates {
>>> my ($ms, $mp) = @_;
>>>   
>>> my ($sid) = PVE::Storage::parse_volume_id($mp->{volume}, 0);
>>> -   die "Warning: Directory storage '$sid' does not support container 
>>> templates!\n"
>>> -   if $scfg->{ids}->{$sid}->{path};
>>> +   assert_storeid_supports_templates($scfg, $sid);
>>> });
>>>   };
>>>   return $@
>>> @@ -1805,6 +1811,8 @@ __PACKAGE__->register_method({
>>>   
>>> my ($mpdata, $old_volid);
>>>   
>>> +   my $storage_cfg = PVE::Storage::config();
>>> +
>>> PVE::LXC::Config->lock_config($vmid, sub {
>>> my $conf = PVE::LXC::Config->load_config($vmid);
>>> PVE::LXC::Config->check_lock($conf);
>>> @@ -1823,6 +1831,8 @@ __PACKAGE__->register_method({
>>> die "you can't move a volume with snapshots and delete the source\n"
>>> if $param->{delete} && 
>>> PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
>>>   
>>> +   assert_storeid_supports_templates($storage_cfg, $storage) if 
>>> PVE::LXC::Config->is_template($conf);
>>> +
>>> PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
>>>   
>>> PVE::LXC::Config->set_lock($vmid, $lockname);
>>> @@ -1833,7 +1843,6 @@ __PACKAGE__->register_method({
>>> PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: 
>>> move --volume $mpkey --storage $storage");
>>>   
>>> my $conf = PVE::

Re: [pve-devel] [zsync] fix: check for incremental sync snapshot.

2020-03-19 Thread Fabian Ebner

Hi,
this does fix an issue when the receiving side has the most recent 
snapshot, but not the 'old_snap' one. And of course testing for 
'last_snap' is correct, since that one is the relevant one for the 
incremental sync. With that:


Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 

Some ideas for further improvements:
* If on the destination there are older snapshots but not most recent 
one, pve-zsync tries to do a full sync and fails with: "destination has 
snapshots". One could get the list of snapshots from the destination, 
the list of snapshots from the source and use the most recent common one 
as the starting point for an incremental sync (and fail if both sides do 
have snapshots but no match).
* Knowing the list of snapshots for the destination could also be used 
to prevent issuing a useless remote 'zfs destroy' when the snapshot to 
be deleted does not exist for the destination.
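
A rough sketch of the "most recent common snapshot" idea (hypothetical helper and
snapshot names, not pve-zsync code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # given source and destination snapshot lists ordered oldest -> newest,
    # pick the most recent common one as the base for an incremental sync
    sub last_common_snapshot {
        my ($source_snaps, $dest_snaps) = @_;
        my %on_dest = map { $_ => 1 } @$dest_snaps;
        for my $snap (reverse @$source_snaps) {
            return $snap if $on_dest{$snap};
        }
        return undef;   # no common snapshot: full sync (or fail if dest has snapshots)
    }

    my @source = qw(rep_default_2020-03-17 rep_default_2020-03-18 rep_default_2020-03-19);
    my @dest   = qw(rep_default_2020-03-17 rep_default_2020-03-18);

    my $base = last_common_snapshot(\@source, \@dest);
    print defined($base) ? "incremental sync from $base\n" : "no common snapshot\n";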


On 18.03.20 07:51, Wolfgang Link wrote:

For an incremental sync you need the last_snap on both sides.
---
  pve-zsync | 13 -
  1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/pve-zsync b/pve-zsync
index ea3178e..893baf0 100755
--- a/pve-zsync
+++ b/pve-zsync
@@ -931,6 +931,7 @@ sub snapshot_destroy {
  }
  }
  
+# check if snapshot for incremental sync exist on dest side

  sub snapshot_exist {
  my ($source , $dest, $method, $dest_user) = @_;
  
@@ -940,22 +941,16 @@ sub snapshot_exist {
  
  my $path = $dest->{all};

  $path .= "/$source->{last_part}" if $source->{last_part};
-$path .= "\@$source->{old_snap}";
+$path .= "\@$source->{last_snap}";
  
  push @$cmd, $path;
  
-

-my $text = "";
-eval {$text =run_cmd($cmd);};
+eval {run_cmd($cmd)};
  if (my $erro =$@) {
warn "WARN: $erro";
return undef;
  }
-
-while ($text && $text =~ s/^(.*?)(\n|$)//) {
-   my $line =$1;
-   return 1 if $line =~ m/^.*$source->{old_snap}$/;
-}
+return 1;
  }
  
  sub send_image {






[pve-devel] [PATCH pve-qemu] vma_writer: Display more error information

2020-03-19 Thread Dominic Jäger
Also print the reason why the function vma_writer_register_stream failed to
help debug errors like in [0].

[0] 
https://forum.proxmox.com/threads/backup-error-vma_writer_register_stream-drive-scsi0-failed-pve-6-1-7.65925/

Signed-off-by: Dominic Jäger 
---
 .../0029-PVE-Backup-add-vma-backup-format-code.patch  | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git 
a/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch 
b/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
index c7a4275..0861a3f 100644
--- a/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
+++ b/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
@@ -8,9 +8,9 @@ Subject: [PATCH 29/32] PVE-Backup: add vma backup format code
  Makefile.objs |   1 +
  vma-reader.c  | 857 ++
  vma-writer.c  | 771 +
- vma.c | 837 
+ vma.c | 838 
  vma.h | 150 +
- 6 files changed, 2618 insertions(+), 1 deletion(-)
+ 6 files changed, 2619 insertions(+), 1 deletion(-)
  create mode 100644 vma-reader.c
  create mode 100644 vma-writer.c
  create mode 100644 vma.c
@@ -1694,7 +1694,7 @@ new file mode 100644
 index 00..a82752448a
 --- /dev/null
 +++ b/vma.c
-@@ -0,0 +1,837 @@
+@@ -0,0 +1,838 @@
 +/*
 + * VMA: Virtual Machine Archive
 + *
@@ -2330,6 +2330,7 @@ index 00..a82752448a
 +}
 +
 +int devcount = 0;
++VmaStatus vmastat;
 +while (optind < argc) {
 +const char *path = argv[optind++];
 +char *devname = NULL;
@@ -2347,7 +2348,8 @@ index 00..a82752448a
 +int dev_id = vma_writer_register_stream(vmaw, devname, size);
 +if (dev_id <= 0) {
 +unlink(archivename);
-+g_error("vma_writer_register_stream '%s' failed", devname);
++vma_writer_get_status(vmaw, &vmastat);
++g_error("error for device '%s': %s", devname, vmastat.errmsg);
 +}
 +
 +BackupJob *job = g_new0(BackupJob, 1);
@@ -2360,7 +2362,6 @@ index 00..a82752448a
 +qemu_coroutine_enter(co);
 +}
 +
-+VmaStatus vmastat;
 +int percent = 0;
 +int last_percent = -1;
 +
-- 
2.20.1



[pve-devel] [PATCH qemu-server] version_guard: early out when major/minor version is high enough

2020-03-19 Thread Stefan Reiter
E.g.: If a feature requires 4.1+pveN and we're using machine version 4.2
we don't need to increase the pve version to N (4.2+pve0 is enough).

We check this by doing a min_version call against a non-existent, higher
pve-version for the major/minor tuple we want to test for, which can
only succeed if the major/minor version alone is already high enough.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e022141..a5b000e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2962,6 +2962,8 @@ sub config_to_command {
 my $version_guard = sub {
my ($major, $minor, $pve) = @_;
return 0 if !min_version($machine_version, $major, $minor, $pve);
+   my $max_pve = 
PVE::QemuServer::Machine::get_pve_version("$major.$minor");
+   return 1 if min_version($machine_version, $major, $minor, $max_pve+1);
$required_pve_version = $pve if $pve && $pve > $required_pve_version;
return 1;
 };
-- 
2.25.1
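
To illustrate the early-out with concrete numbers, a simplified stand-in for
min_version() and the probe (only a sketch, not the actual PVE::QemuServer helpers):

    #!/usr/bin/perl
    use strict;
    use warnings;

    sub min_version {
        my ($version, $major, $minor, $pve) = @_;
        my ($v_maj, $v_min, $v_pve) = @$version;
        $pve //= 0;
        return 1 if $v_maj > $major;
        return 0 if $v_maj < $major;
        return 1 if $v_min > $minor;
        return 0 if $v_min < $minor;
        return $v_pve >= $pve ? 1 : 0;
    }

    my $machine_version = [4, 2, 0];   # running with machine 4.2+pve0
    my $max_pve_for_42  = 2;           # assumed highest known pve revision for 4.2

    # a feature requiring 4.1+pve3 is satisfied by 4.2 alone ...
    print min_version($machine_version, 4, 1, 3), "\n";                    # 1
    # ... and the probe against a non-existent higher pve revision passes too,
    # so the guard returns early without bumping $required_pve_version
    print min_version($machine_version, 4, 1, $max_pve_for_42 + 1), "\n";  # 1
    # for a 4.2+pveN requirement the probe fails and the pve bump is kept
    print min_version($machine_version, 4, 2, $max_pve_for_42 + 1), "\n";  # 0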




Re: [pve-devel] [PATCH container 1/2] fix #1904: convert to base image when moving a volume of a template

2020-03-19 Thread Fabian Ebner

On 19.03.20 09:01, Fabian Grünbichler wrote:

On March 18, 2020 2:02 pm, Fabian Ebner wrote:

Signed-off-by: Fabian Ebner 
---

For VMs this already happens.

When adding volumes to templates no such conversion to
base images happens yet (affects both VM/LXC). Because
templates are more-or-less supposed to be read-only it
probably makes sense to disallow adding volumes altogether.
Or should we bother and do conversion instead?

When a volume with linked clones is moved with 'delete source'
then the volume will be copied over and the source volume does
not get deleted, but the source volume disappears from the config.
With the second patch it gets added as an unused volume instead.

  src/PVE/API2/LXC.pm | 23 +++
  1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index a5aa5fc..9eb52dc 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -468,6 +468,13 @@ __PACKAGE__->register_method({
return $rpcenv->fork_worker($workername, $vmid, $authuser, $realcmd);
  }});
  
+sub assert_storeid_supports_templates {

+my ($scfg, $sid) = @_;
+
+die "Warning: Directory storage '$sid' does not support container 
templates!\n"
+   if $scfg->{ids}->{$sid}->{path};
+}
+


so this lead me down quite a rabbit hole back to

https://bugzilla.proxmox.com/show_bug.cgi?id=1778

I think the original fix that lead to this "dir storages don't support
CT templates" was wrong - what we should do is error out early on
attempts to do a linked clone of a volume on a directory storage (as
that would create a qcow2 image, which containers don't support).

having a template on directory storage is perfectly fine - you just can
only do full clones, since .raw does not support anything else.

could you take a look at how involved cleaning this up would be?



I think the problem is that volume_has_feature cannot distinguish 
between a raw volume of a VM and a raw volume of a container. And the 
clone feature is not actually present in the latter case.


I see two approaches:
One is to extend volume_has_feature with $vmtype (would be a 
non-breaking API change).
The other is doing an additional check in LXC's clone_vm and not only 
rely on volume_has_feature.



  sub check_storage_supports_templates {
  my ($conf) = @_;
  
@@ -477,8 +484,7 @@ sub check_storage_supports_templates {

my ($ms, $mp) = @_;
  
  	my ($sid) = PVE::Storage::parse_volume_id($mp->{volume}, 0);

-   die "Warning: Directory storage '$sid' does not support container 
templates!\n"
-   if $scfg->{ids}->{$sid}->{path};
+   assert_storeid_supports_templates($scfg, $sid);
});
  };
  return $@
@@ -1805,6 +1811,8 @@ __PACKAGE__->register_method({
  
  	my ($mpdata, $old_volid);
  
+	my $storage_cfg = PVE::Storage::config();

+
PVE::LXC::Config->lock_config($vmid, sub {
my $conf = PVE::LXC::Config->load_config($vmid);
PVE::LXC::Config->check_lock($conf);
@@ -1823,6 +1831,8 @@ __PACKAGE__->register_method({
die "you can't move a volume with snapshots and delete the source\n"
if $param->{delete} && 
PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
  
+	assert_storeid_supports_templates($storage_cfg, $storage) if PVE::LXC::Config->is_template($conf);

+
PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
  
  	PVE::LXC::Config->set_lock($vmid, $lockname);

@@ -1833,7 +1843,6 @@ __PACKAGE__->register_method({
PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move 
--volume $mpkey --storage $storage");
  
  		my $conf = PVE::LXC::Config->load_config($vmid);

-   my $storage_cfg = PVE::Storage::config();
  
  		my $new_volid;
  
@@ -1843,7 +1852,13 @@ __PACKAGE__->register_method({

my $source_storage = 
PVE::Storage::parse_volume_id($old_volid);
my $movelimit = PVE::Storage::get_bandwidth_limit('move', 
[$source_storage, $storage], $bwlimit);
$new_volid = PVE::LXC::copy_volume($mpdata, $vmid, 
$storage, $storage_cfg, $conf, undef, $movelimit);
-   $mpdata->{volume} = $new_volid;
+   if (PVE::LXC::Config->is_template($conf)) {
+   PVE::Storage::activate_volumes($storage_cfg, 
[$new_volid]);
+   my $template_volid = 
PVE::Storage::vdisk_create_base($storage_cfg, $new_volid);
+   $mpdata->{volume} = $template_volid;
+   } else {
+   $mpdata->{volume} = $new_volid;
+   }
  
  		PVE::LXC::Config->lock_config($vmid, sub {

my $digest = $conf->{digest};
--
2.20.1







[pve-devel] applied: [PATCH container v2] lxc_config: mount /sys as mixed for unprivileged by default

2020-03-19 Thread Wolfgang Bumiller

applied

On 3/18/20 10:46 AM, Thomas Lamprecht wrote:

CONTAINER_INTERFACE[0] is what the systemd people call their API, and
we need to adapt to it a bit, even if it means doing stupid,
unnecessary things, as otherwise systemd decides to regress and
suddenly breaks the network stack in CTs after an upgrade[1].

This mounts the parent /sys as mixed, which is:

mount /sys as read-only but with /sys/devices/virtual/net writable.

-- man 5 lxc.container.conf

Allow users to overwrite that with a features knob, as surely some
will run into other issues otherwise, and manually adding a
"lxc.mount.auto" entry in the container .conf is not a nice user
experience for most.

Fixes the system regression in up-to-date Arch installations
introduced by [2].

[0]: https://systemd.io/CONTAINER_INTERFACE/
[1]: https://github.com/systemd/systemd/issues/15101#issuecomment-598607582
[2]: 
https://github.com/systemd/systemd/commit/bf331d87171b7750d1c72ab0b140a240c0cf32c3

Signed-off-by: Thomas Lamprecht 
---

changes v1 -> v2:
* use sys:mixed and only do this for upriv. CTs
* add knob to allow easier opting out of this

  src/PVE/LXC.pm| 6 ++
  src/PVE/LXC/Config.pm | 7 +++
  2 files changed, 13 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 0742a53..df52afa 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -662,6 +662,12 @@ sub update_lxc_config {
$raw .= "lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 
0\n";
  }
  
+if ($unprivileged && !$features->{force_rw_sys}) {

+   # unpriv. CT default to sys:rw, but that doesn't always plays well with
+   # systemd, e.g., systemd-networkd 
https://systemd.io/CONTAINER_INTERFACE/
+   $raw .= "lxc.mount.auto = sys:mixed\n";
+}
+
  # WARNING: DO NOT REMOVE this without making sure that loop device nodes
  # cannot be exposed to the container with r/w access (cgroup perms).
  # When this is enabled mounts will still remain in the monitor's namespace
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index e88ba0b..0909773 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -331,6 +331,13 @@ my $features_desc = {
." This requires a kernel with seccomp trap to user space support (5.3 
or newer)."
." This is experimental.",
  },
+force_rw_sys => {
+   optional => 1,
+   type => 'boolean',
+   default => 0,
+   description => "Mount /sys in unprivileged containers as `rw` instead of 
`mixed`."
+   ." This can break networking under newer (>= v245) systemd-network 
use."
+},
  };
  
  my $confdesc = {







Re: [pve-devel] [PATCH container 1/2] fix #1904: convert to base image when moving a volume of a template

2020-03-19 Thread Fabian Grünbichler
On March 18, 2020 2:02 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner 
> ---
> 
> For VMs this already happens.
> 
> When adding volumes to templates no such conversion to
> base images happens yet (affects both VM/LXC). Because
> templates are more-or-less supposed to be read-only it
> probably makes sense to disallow adding volumes altogether.
> Or should we bother and do conversion instead?
> 
> When a volume with linked clones is moved with 'delete source'
> then the volume will be copied over and the source volume does
> not get deleted, but the source volume disappears from the config.
> With the second patch it gets added as an unused volume instead.
> 
>  src/PVE/API2/LXC.pm | 23 +++
>  1 file changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
> index a5aa5fc..9eb52dc 100644
> --- a/src/PVE/API2/LXC.pm
> +++ b/src/PVE/API2/LXC.pm
> @@ -468,6 +468,13 @@ __PACKAGE__->register_method({
>   return $rpcenv->fork_worker($workername, $vmid, $authuser, $realcmd);
>  }});
>  
> +sub assert_storeid_supports_templates {
> +my ($scfg, $sid) = @_;
> +
> +die "Warning: Directory storage '$sid' does not support container 
> templates!\n"
> + if $scfg->{ids}->{$sid}->{path};
> +}
> +

so this lead me down quite a rabbit hole back to 

https://bugzilla.proxmox.com/show_bug.cgi?id=1778

I think the original fix that lead to this "dir storages don't support 
CT templates" was wrong - what we should do is error out early on 
attempts to do a linked clone of a volume on a directory storage (as 
that would create a qcow2 image, which containers don't support).

having a template on directory storage is perfectly fine - you just can 
only do full clones, since .raw does not support anything else.

could you take a look at how involved cleaning this up would be?

>  sub check_storage_supports_templates {
>  my ($conf) = @_;
>  
> @@ -477,8 +484,7 @@ sub check_storage_supports_templates {
>   my ($ms, $mp) = @_;
>  
>   my ($sid) = PVE::Storage::parse_volume_id($mp->{volume}, 0);
> - die "Warning: Directory storage '$sid' does not support container 
> templates!\n"
> - if $scfg->{ids}->{$sid}->{path};
> + assert_storeid_supports_templates($scfg, $sid);
>   });
>  };
>  return $@
> @@ -1805,6 +1811,8 @@ __PACKAGE__->register_method({
>  
>   my ($mpdata, $old_volid);
>  
> + my $storage_cfg = PVE::Storage::config();
> +
>   PVE::LXC::Config->lock_config($vmid, sub {
>   my $conf = PVE::LXC::Config->load_config($vmid);
>   PVE::LXC::Config->check_lock($conf);
> @@ -1823,6 +1831,8 @@ __PACKAGE__->register_method({
>   die "you can't move a volume with snapshots and delete the source\n"
>   if $param->{delete} && 
> PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
>  
> + assert_storeid_supports_templates($storage_cfg, $storage) if 
> PVE::LXC::Config->is_template($conf);
> +
>   PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
>  
>   PVE::LXC::Config->set_lock($vmid, $lockname);
> @@ -1833,7 +1843,6 @@ __PACKAGE__->register_method({
>   PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: 
> move --volume $mpkey --storage $storage");
>  
>   my $conf = PVE::LXC::Config->load_config($vmid);
> - my $storage_cfg = PVE::Storage::config();
>  
>   my $new_volid;
>  
> @@ -1843,7 +1852,13 @@ __PACKAGE__->register_method({
>   my $source_storage = 
> PVE::Storage::parse_volume_id($old_volid);
>   my $movelimit = PVE::Storage::get_bandwidth_limit('move', 
> [$source_storage, $storage], $bwlimit);
>   $new_volid = PVE::LXC::copy_volume($mpdata, $vmid, 
> $storage, $storage_cfg, $conf, undef, $movelimit);
> - $mpdata->{volume} = $new_volid;
> + if (PVE::LXC::Config->is_template($conf)) {
> + PVE::Storage::activate_volumes($storage_cfg, 
> [$new_volid]);
> + my $template_volid = 
> PVE::Storage::vdisk_create_base($storage_cfg, $new_volid);
> + $mpdata->{volume} = $template_volid;
> + } else {
> + $mpdata->{volume} = $new_volid;
> + }
>  
>   PVE::LXC::Config->lock_config($vmid, sub {
>   my $digest = $conf->{digest};
> -- 
> 2.20.1
> 
> 
