On April 14, 2020 2:02 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
>
> This is the behavior VMs have. Check for the owner and
> if it's a volume happens in delete_mountpoint_volume.
>
> src/PVE/LXC.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/
potential for follow-up inline
On April 14, 2020 2:02 pm, Fabian Ebner wrote:
> by extending the description. Also fixes the relevant for loop to
> iterate over MAX_UNUSED_DISKS instead of MAX_MOUNT_POINTS.
>
> Signed-off-by: Fabian Ebner
> ---
> src/PVE/LXC/Config.pm | 24 +
On April 14, 2020 11:45 am, Mira Limbeck wrote:
> Looks good to me.
>
> Reviewed-By: Mira Limbeck
> Tested-By: Mira Limbeck
>
> On 4/14/20 10:51 AM, Fabian Grünbichler wrote:
>> fixing the following two issues:
>> - the legacy code path was never converted to th
starts a new connection over that single
socket.
I took the liberty of renaming the variables/keys since I found
'tunnel_addr' and 'sock_addr' rather confusing.
Signed-off-by: Fabian Grünbichler
---
tested:
- migration with multiple disks with and without replication
- same,
On April 10, 2020 11:25 am, Thomas Lamprecht wrote:
> On 4/10/20 11:21 AM, Fabian Grünbichler wrote:
>> On April 10, 2020 11:14 am, Thomas Lamprecht wrote:
>>> On 4/10/20 10:34 AM, Fabian Grünbichler wrote:
>>>> + a follow up to fix typoed 'expeted' in
On April 10, 2020 11:14 am, Thomas Lamprecht wrote:
> On 4/10/20 10:34 AM, Fabian Grünbichler wrote:
>> + a follow up to fix typoed 'expeted' in ReplicationTestEnv.pm
>>
>
> that should not have been applied now as guest-common is not yet bumped, so
> this breaks
+ a follow up to fix typoed 'expeted' in ReplicationTestEnv.pm
On April 10, 2020 10:27 am, Dominic Jäger wrote:
> pve-guest-common got a new log line [0] for rate and transport type of a
> replication. This line must be added to the replication tests.
>
> [0] e90f586aab5caad4d4c5e18711316e8dc5225
thanks for the prompt follow-up!
On April 8, 2020 1:40 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
>
> Took the opportunity to document the whole function
> instead of just the new valid_target_formats option.
>
> PVE/Storage.pm | 15 +++
> 1 file changed, 15 insert
On April 1, 2020 12:44 pm, Oguz Bektas wrote:
> this adds the 'reinstall' flag, which is a special forced restore
> (overwrites an existing container with chosen template)
isn't this backwards? a reinstall should be a special force create
(based on the current config, instead of an empty one), no?
thanks!
On April 7, 2020 2:25 pm, Aaron Lauterer wrote:
> Getting the volume sizes as byte values instead of converted to human
> readable units helps to avoid rounding errors in the further processing
> if the volume size is more on the odd side.
>
> The `zfs list` command supports the -p(arseab
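The rounding problem described above is easy to demonstrate. The sketch below is Python rather than the Perl in PVE, and `human_to_bytes` is a made-up helper: plain `zfs list` rounds sizes to a few significant digits for display, so converting the human-readable value back to bytes cannot recover the exact size that `zfs list -p` reports.

```python
# Hypothetical illustration (not PVE code): 'zfs list' rounds sizes for
# display, while 'zfs list -p' prints exact byte values.
def human_to_bytes(size):
    """Naive reverse conversion of a rounded human-readable size."""
    units = {'K': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4}
    if size[-1] in units:
        return int(float(size[:-1]) * units[size[-1]])
    return int(size)

exact = 10_766_461_370       # odd-sized volume, as 'zfs list -p' reports it
shown = '10.0G'              # what plain 'zfs list' displays for it
recovered = human_to_bytes(shown)
assert recovered != exact    # the round-trip loses roughly 29 MB here
```

Any volume whose size is "more on the odd side" collapses to the same rounded string, which is exactly why the parseable output avoids errors in further processing.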
except for #18, which could be simplified together with a follow-up on
#14.
thanks for the patience and following through with this one!
On April 8, 2020 11:24 am, Fabian Ebner wrote:
> Previous discussion here: [0].
>
> This series aims to allow offline migration with '--targetstorage'
> and i
small comment in-line
On April 8, 2020 11:24 am, Fabian Ebner wrote:
> It was necessary to move foreach_volid back to QemuServer.pm
>
> In VZDump/QemuServer.pm and QemuMigrate.pm the dependency on
> QemuConfig.pm was already there, just the explicit "use" was missing.
>
> Signed-off-by: Fabian E
On April 8, 2020 11:25 am, Fabian Ebner wrote:
> Call cleanup_remotedisks in phase1_cleanup as well, because that's where
> we end if sync_disks fails and some disks might already have been
> transferred successfully.
>
> Signed-off-by: Fabian Ebner
> ---
> PVE/QemuMigrate.pm | 30 ++
small nit inline
On April 8, 2020 11:25 am, Fabian Ebner wrote:
> Use 'update_volume_ids' for the live-migrated disks as well.
>
> Signed-off-by: Fabian Ebner
> ---
> PVE/QemuMigrate.pm | 26 ++
> 1 file changed, 10 insertions(+), 16 deletions(-)
>
> diff --git a/PVE/Qe
On April 7, 2020 2:19 pm, Fabian Ebner wrote:
> On 06.04.20 11:52, Fabian Grünbichler wrote:
>> On April 6, 2020 10:46 am, Fabian Ebner wrote:
>>> On 27.03.20 10:09, Fabian Grünbichler wrote:
>>>> with a small follow-up for vzdump (see separate mail), and comment bel
On April 7, 2020 11:28 am, Aaron Lauterer wrote:
>
>
> On 4/3/20 5:07 PM, Fabian Grünbichler wrote:
>> there's another instance of 'zfs list ...' in PVE::Storage that could
>> also be switched to '-p'
>
> Which one do you mean? Line 610
On April 6, 2020 10:46 am, Fabian Ebner wrote:
> On 27.03.20 10:09, Fabian Grünbichler wrote:
>> with a small follow-up for vzdump (see separate mail), and comment below
>>
>> On March 23, 2020 12:18 pm, Fabian Ebner wrote:
>>> With the option valid_target_format
there's another instance of 'zfs list ...' in PVE::Storage that could
also be switched to '-p'
On April 3, 2020 2:29 pm, Aaron Lauterer wrote:
> Getting the volume sizes as byte values instead of converted to human
> readable units helps to avoid rounding errors in the further processing
> if the
On April 2, 2020 6:34 pm, Thomas Lamprecht wrote:
> On 3/30/20 1:41 PM, Fabian Grünbichler wrote:
>> the syntax is backwards compatible, providing a single storage ID or '1'
>> works like before. the new helper ensures consistent behaviour at all
>> call sites.
>
otherwise VMA files passed in as paths instead of as volids don't
work anymore.
Signed-off-by: Fabian Grünbichler
---
I hope this is the last one ;)
PVE/API2/Qemu.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 396879a..6f
On April 1, 2020 5:39 pm, Thomas Lamprecht wrote:
> On 3/30/20 1:41 PM, Fabian Grünbichler wrote:
>> generalized from the start to support extension to bridges or other
>> entities as well.
>>
>> this gets us incremental support for the CLI, e.g.:
>>
>>
order between this and 9 is wrong, and this introduces a circular
dependency. extract_challenge would probably need to go into
Challenge.pm? that way, new challenge types that don't need this/do it
differently can override it as well..
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Signed-o
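The base-class override suggested above can be sketched roughly like this (Python instead of the actual Perl, and every class and type name besides `extract_challenge` is hypothetical): a default implementation lives on the common challenge class, and challenge types that do it differently simply redefine the method.

```python
# Hypothetical sketch of moving extract_challenge into a base class so
# plugin subclasses can override it.
class Challenge:
    type = None  # challenge type string, set by subclasses

    def extract_challenge(self, challenges):
        # default: pick the entry matching this plugin's challenge type
        for c in challenges:
            if c.get('type') == self.type:
                return c
        raise ValueError(f"no {self.type} challenge offered")

class StandaloneChallenge(Challenge):
    type = 'http-01'  # inherits the default behaviour unchanged

class DNSChallenge(Challenge):
    type = 'dns-01'

    def extract_challenge(self, challenges):
        # a type needing different behaviour overrides the default; here
        # it just delegates, but it could e.g. filter offers first
        return super().extract_challenge(challenges)
```

This avoids the circular dependency: the helper lives with the challenge abstraction instead of a concrete plugin.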
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> This plugin calls the custom script acme.sh and uses the implementation of
> the DNS API.
>
> Signed-off-by: Wolfgang Link
> ---
> debian/control | 3 +-
> src/Makefile | 1 +
> src/PVE/ACME.pm |
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> At the moment, Proxmox has two different configurations that require
> different properties.
> DNSChallenge requires credentials for the DNS API.
> Standalone has no settings because Letsencrypt only supports port 80 with the
> http-01 challenge.
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> These are the two main functions that a plugin should offer.
> Setup creates the endpoint at which Letsencrypt does the validation, teardown
> does the cleanup.
>
> Signed-off-by: Wolfgang Link
> ---
> src/PVE/ACME.pm| 43 ++
I think this is somehow leftover and should be dropped/merged with other
patches?
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Signed-off-by: Wolfgang Link
> ---
> PVE/CLI/pvenode.pm | 8 +++-
> 1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/PVE/CLI/pvenode.pm b/PVE
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> With this configuration it is possible to use many different plugins
> with different providers and users.
>
> Signed-off-by: Wolfgang Link
> ---
> PVE/API2/ACMEPlugin.pm | 120 +
> PVE/API2/Cluster.pm
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Copy the DNS plugins from acme.sh
>
> The project acme.sh can be found here.
> https://github.com/Neilpang/acme.sh
>
> Signed-off-by: Wolfgang Link
> ---
> .gitmodules | 3 ++
> Makefile | 10 -
> acme.sh | 1 +
> src/Makefil
note: this one requires a breaks+replaces on the other side
(proxmox-acme), and a version bump here (so that proxmox-acme can have
an appropriate versioned depends).
since the other two pve-common are independent I already applied them -
otherwise this one should have probably been 3/3 ;)
On M
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Signed-off-by: Wolfgang Link
> ---
> src/PVE/JSONSchema.pm | 7 +++
> 1 file changed, 7 insertions(+)
>
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index 3eb38eb..0c655bc 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PV
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Signed-off-by: Wolfgang Link
> ---
> .gitignore | 5 +
> Makefile | 40
> debian/changelog | 5 +
> debian/compat| 1 +
> debian/control
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> We use these functions to add and remove a txt record via the dnsapi.
>
> Signed-off-by: Wolfgang Link
> ---
> src/proxmox-acme | 68
> 1 file changed, 68 insertions(+)
>
> diff --git a/src/proxm
some more comments where this gets actually used later on!
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Allow more than one domain entry, but only one domain per entry is allowed.
> Before that, the Acme parameter could have multiple domains.
>
> Signed-off-by: Wolfgang Link
> ---
> PVE/N
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> acme.sh DNS plugins expect a configuration in which the login information
> is stored.
> We pass the credentials with the command.
> This function supports the expected behavior of the plugins.
>
> Signed-off-by: Wolfgang Link
> ---
> src/proxmo
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> The storage_id is the same as the plugin_id.
>
> Signed-off-by: Wolfgang Link
> ---
> src/PVE/JSONSchema.pm | 13 +
> 1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> i
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Signed-off-by: Wolfgang Link
> ---
> PVE/API2/ACME.pm | 16 +--
> PVE/NodeConfig.pm | 67 +--
> 2 files changed, 73 insertions(+), 10 deletions(-)
>
> diff --git a/PVE/API2/ACME.pm b/PVE/API2/
some high-level feedback:
- looks much better than previous version already, I think v3 should be
the one that makes it :)
- just upgrading triggers the following error on order/renew:
Loading ACME account details
Placing ACME order
Order URL: https://acme-staging-v02.api.letsencrypt.org/acme/
On March 31, 2020 12:08 pm, Wolfgang Link wrote:
> Signed-off-by: Wolfgang Link
> ---
> data/PVE/Cluster.pm | 1 +
> data/src/status.c | 1 +
> 2 files changed, 2 insertions(+)
>
> diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
> index 068d626..b410003 100644
> --- a/data/PVE/Cluster.p
On March 31, 2020 10:38 am, Frans Fürst wrote:
> Hi, I'm not sure whether this is a `pve-devel` or rather a `pve-user`
> question, but since I'm developing against the Proxmox VE API I'm trying it
> here first.
both are fine I'd say ;)
> I want to get a status for all configured backup jobs via
in addition to printing it. preparation for remote cluster migration,
where we want to return this in a structured fashion over the migration
tunnel instead of parsing stdout via SSH.
Signed-off-by: Fabian Grünbichler
---
PVE/QemuServer.pm | 18 +++---
1 file changed, 15 insertions
the syntax is backwards compatible, providing a single storage ID or '1'
works like before. the new helper ensures consistent behaviour at all
call sites.
Signed-off-by: Fabian Grünbichler
---
Notes:
needs a versioned dep on pve-common with the new format and parse_idmap
PVE/AP
to start breaking up vm_start before extending parts for new migration
features like storage and network mapping.
Signed-off-by: Fabian Grünbichler
---
Notes:
best viewed with -w ;)
PVE/QemuServer.pm | 572 +++---
1 file changed, 291 insertions
generalized from the start to support extension to bridges or other
entities as well.
this gets us incremental support for the CLI, e.g.:
--targetstorage foo:bar --targetstorage bar:baz --targetstorage foo
creates a mapping of
foo=>bar
bar=>baz
with a default of foo
Signed-off-by:
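The mapping syntax shown above can be modelled with a short sketch (Python rather than the actual Perl helper, and `parse_storage_map` is an invented name): entries of the form `source:target` populate the map, while a bare storage ID sets the default.

```python
# Hypothetical model of the --targetstorage mapping syntax described
# above; not the real parse helper.
def parse_storage_map(entries):
    mapping = {}
    default = None
    for entry in entries:
        if ':' in entry:
            source, target = entry.split(':', 1)
            mapping[source] = target   # explicit source => target pair
        else:
            default = entry            # bare ID acts as the default
    return mapping, default

# --targetstorage foo:bar --targetstorage bar:baz --targetstorage foo
mapping, default = parse_storage_map(['foo:bar', 'bar:baz', 'foo'])
# mapping == {'foo': 'bar', 'bar': 'baz'}, default == 'foo'
```

A single bare storage ID (or '1') degenerates to just a default, which is how the syntax stays backwards compatible.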
as preparation of targetstorage mapping and remote migration. this also
removes re-using of the $local_volumes hash in the original code.
Signed-off-by: Fabian Grünbichler
---
PVE/QemuServer.pm | 134 --
1 file changed, 71 insertions(+), 63 deletions
into one sub that retrieves the local disks, and the actual NBD
allocation. that way, remote incoming migration can just call the NBD
allocation with a custom list of volume names/storages/..
Signed-off-by: Fabian Grünbichler
---
PVE/QemuServer.pm | 52
as preparation for refactoring it further. remote migration will add
another 1-2 parameters, and it is already unwieldy enough as it is.
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Qemu.pm | 23
PVE/QemuConfig.pm| 6 -
PVE/QemuServer.pm| 47
both where previously missing. the existing 'check_storage_access'
helper is not applicable here since it operates on a full set of VM
config options, not just storage IDs.
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Qemu.pm | 16 ++--
1 file changed, 14 insert
, and use
the returned hash instead of parsing STDOUT (these can be skipped for
now, it's just a hint at where I am going ;))
pve-common is just the mapping schema stuff, already put there so that
we can re-use it in pve-container for migration as well.
pve-common:
Fabian Grünbichler (1):
to also handle cases where disk allocation failed in the remote
vm_start, and we only have a bitmap but no target drive information.
Signed-off-by: Fabian Grünbichler
---
PVE/QemuMigrate.pm | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.pm b/PVE
in addition to 1 and 2 I applied:
3-7, 9 + versioned depends / breaks accordingly between
qemu-server/pve-container and pve-guest-common
the rest needs a rebase anyway - and possibly a slightly bigger one in
case my vm_start series goes in faster ;)
On March 26, 2020 9:09 am, Fabian Ebner wrote:
On March 26, 2020 9:09 am, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> PVE/Storage.pm | 5 +
> 1 file changed, 5 insertions(+)
>
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index a46550c..7af1fc3 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -575,6 +575,11 @@ s
On March 26, 2020 9:09 am, Fabian Ebner wrote:
> This makes sure that live migration also respects content types.
>
> Signed-off-by: Fabian Ebner
> ---
> PVE/QemuMigrate.pm | 6 ++
> 1 file changed, 6 insertions(+)
>
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 9cff64d..f2
with a small follow-up for vzdump (see separate mail), and comment below
On March 23, 2020 12:18 pm, Fabian Ebner wrote:
> With the option valid_target_formats it's possible
> to let the caller specify possible formats for the target
> of an operation.
> [0]: If the option is not set, assume that
since immutable .raw base volumes cannot be mounted RW.
Signed-off-by: Fabian Grünbichler
---
src/PVE/VZDump/LXC.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 25a50d1..09c4d47 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE
this does not currently trigger since nothing uses $self->{target_drive}
afterwards.
Signed-off-by: Fabian Grünbichler
---
Notes:
new in v2
triggered by v2 of next patch
PVE/QemuMigrate.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.p
Signed-off-by: Fabian Grünbichler
---
Notes:
v1->v2:
- communicate re-used disks back to source node for abort in case target
node is too old
the original bug effectively makes qemu-server 6.1-10 and 6.1-11 very
broken qemu-server versions
PVE/API2/Qemu.pm | 5 -
s,
thus blocking the disk cleanup.
Signed-off-by: Fabian Grünbichler
---
Notes:
v1->v2:
- also cleanup remote disks that might have been allocated
- use new tracked replicated_volumes in cleanup instead of bitmap existence
PVE/QemuMigrate.pm | 25 ++---
1 file
a newly allocated disk, while leaving the actual base as
unused, unreferenced disk. I'll send a v2 ;)
On March 26, 2020 10:10 am, Fabian Grünbichler wrote:
> by only checking for replicatable volumes when a replication job is
> defined, and passing only actually replicated volumes t
since $self->{storage_migration} is not (yet) set at that point, but
bitmaps have been created.
Signed-off-by: Fabian Grünbichler
---
PVE/QemuMigrate.pm | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index ef5f6fd..023c
by only checking for replicatable volumes when a replication job is
defined, and passing only actually replicated volumes to the target node
via STDIN.
otherwise this can pick up theoretically replicatable, but not actually
replicated volumes and treat them wrong.
Signed-off-by: Fabian
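A minimal sketch of the described filtering (Python instead of the Perl in QemuMigrate.pm; the function and parameter names are invented): without a defined replication job nothing is treated as replicated, and only volumes actually present in the replicated state are forwarded to the target.

```python
# Hypothetical model of the fix described above.
def replicated_volumes(local_volumes, repl_job, replicated_state):
    """Return the volumes to report as replicated to the target node."""
    if repl_job is None:
        # no replication job defined: do not even consider replicatable
        # volumes, since they were never actually replicated
        return []
    # pass only actually replicated volumes, not merely replicatable ones
    return [v for v in local_volumes if v in replicated_state]
```

This is the distinction the commit message draws: "theoretically replicatable" volumes exist whenever the storage supports the feature, but treating them as replicated is only correct when a job has really synced them.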
thanks for catching this! seems like I just tested without replication, but
not without replicatable disks..
On March 25, 2020 1:18 pm, Fabian Ebner wrote:
> There is a need to set $noerr, because otherwise migration for a
> VM with a non-replicatable volume fails with:
> missing replicate featur
On March 25, 2020 11:29 am, Thomas Lamprecht wrote:
> On 3/19/20 3:42 PM, Stefan Reiter wrote:
>> Seems to work as advertised :) Thus, for the series:
>>
>> Tested-by: Stefan Reiter
>
> with that: applied series, thanks!
>
> We definitively need to handle the UI UX better for this, add info to
On March 20, 2020 5:28 pm, Stoiko Ivanov wrote:
> With the recent upload of 0.8.3-2 to sid, 2 functional changes were included,
> which might make sense for our users as well:
> * preserving zfs-zed snippet config across upgrades (enabled via symlinks in
> /etc, they get restored on upgrade even
On March 19, 2020 1:37 pm, Fabian Ebner wrote:
> Relevant for the 'clone' feature, because Plugin.pm's clone_image
> always produces qcow2. Also fixed style for neighboring if/else block.
>
> Signed-off-by: Fabian Ebner
> ---
>
> Previous discussion:
> https://pve.proxmox.com/pipermail/pve-deve
On March 19, 2020 10:43 am, Fabian Ebner wrote:
> On 19.03.20 09:01, Fabian Grünbichler wrote:
>> On March 18, 2020 2:02 pm, Fabian Ebner wrote:
>>> Signed-off-by: Fabian Ebner
>>> ---
>>>
>>> For VMs this already happens.
>>>
>>>
On March 18, 2020 2:02 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
>
> For VMs this already happens.
>
> When adding volumes to templates no such conversion to
> base images happens yet (affects both VM/LXC). Because
> templates are more-or-less supposed to be read-only it
> prob
On March 18, 2020 11:57 am, Aaron Lauterer wrote:
>
>
> On 3/17/20 3:33 PM, Fabian Grünbichler wrote:
>> On March 16, 2020 4:44 pm, Aaron Lauterer wrote:
>>> This extracts the logic which guests are to be included in a backup job
>>> into its own method
On March 18, 2020 10:02 am, Stefan Reiter wrote:
> On 17/03/2020 20:56, Mira Limbeck wrote:
>> The reuse of the tunnel, which we're opening to communicate with the target
>> node and to forward the unix socket for the state migration, for the NBD unix
>> socket requires adding support for an array
On March 18, 2020 10:02 am, Stefan Reiter wrote:
> On 17/03/2020 20:56, Mira Limbeck wrote:
>> As the NBD server spawned by qemu can only listen on a single socket,
>> we're dependent on a version being passed to vm_start that indicates
>> which protocol can be used, TCP or Unix, by the source node
On March 10, 2020 2:57 pm, Alexandre DERUMIER wrote:
>>>[ 5] 0.00-10.00 sec 2.58 GBytes 2.22 Gbits/sec 0 sender
>>>[ 5] 0.00-10.00 sec 2.57 GBytes 2.21 Gbits/sec receiver
>>>iperf Done.
>
>>>this is with TLS and our regular AnyEvent API server handling the
>>>connection, with the target being
and add some more details to comments.
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Qemu.pm | 2 +-
PVE/QemuMigrate.pm | 4 ++--
PVE/QemuServer.pm | 14 +-
3 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE
On March 16, 2020 4:44 pm, Aaron Lauterer wrote:
> Move the logic which volumes are included in the backup job to its own
> method and adapt the VZDump code accordingly. This makes it possible to
> develop other features around backup jobs.
>
> Signed-off-by: Aaron Lauterer
> ---
>
> v2 -> v3: r
On March 16, 2020 4:44 pm, Aaron Lauterer wrote:
> This extracts the logic which guests are to be included in a backup job
> into its own method 'get_included_guests'. This makes it possible to
> develop other features around backup jobs.
>
> Logic which was spread out across the API2/VZDump.pm f
On March 17, 2020 12:40 pm, Thomas Lamprecht wrote:
> On 3/17/20 11:21 AM, Stefan Reiter wrote:
>>> +$local_volumes->{$opt} = $conf->{${opt}};
>>
>> Does $conf->{${opt}} have too many brackets or is this another arcane perl
>> syntax I've yet to discover? (iow. why not just $conf->{$o
by re-using a dirty bitmap that represents changes since the divergence
of source and target volume. requires a qemu that supports incremental
drive-mirroring, and will die otherwise.
Signed-off-by: Fabian Grünbichler
---
Notes:
v1-v2:
- use newer Qemu patches picked up by me and
with hiding it behind an extra flag for now that we later switch to
default/always on.
qemu:
Fabian Grünbichler (1):
add bitmap drive-mirror patches
...d-support-for-sync-bitmap-mode-never.patch | 443 +++
...-support-for-conditional-and-always-.patch | 83 +
...check-for-bitmap-mode-wi
to make migration logs a bit easier to grasp with a quick glance.
Signed-off-by: Fabian Grünbichler
---
Notes:
unchanged since v1
PVE/QemuMigrate.pm | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 44e4c57
Signed-off-by: Fabian Grünbichler
---
Notes:
v1->v2:
- rebased and fixed some conflicts
for Qemu 4.2 you probably want the 'block-job-cancel instead of
blockjob-complete' change recently discussed in response to Mira's UNIX
migration series as well, otherwise shutting down
On March 16, 2020 2:53 pm, Fabian Ebner wrote:
>
> On 16.03.20 12:07, Fabian Grünbichler wrote:
>> On March 12, 2020 1:08 pm, Fabian Ebner wrote:
>>> This is the second half for the previous series [0].
>>>
>>> This series aims to allow offline migration
needs a rebase (please always rebase before sending - the last commit on
src/PVE/VZDump/LXC.pm was done before you sent this patch out!)
On February 27, 2020 11:01 am, Aaron Lauterer wrote:
> Move the logic which mountpoints are included in the backup job to its
> own method and adapt the VZDump
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> This is the second half for the previous series [0].
>
> This series aims to allow offline migration with '--targetstorage'
> and improve handling of unused/orphaned disks when migrating.
> It also makes it possible to migrate volumes between storages
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> to guess a valid volname for a targetstorage of a different type.
> This makes it possible to migrate raw volumes between 'dir' and 'lvm'
> storages.
>
> It is only used when the storage type for the source storage X
> and target storage Y differ an
two nits in-line
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> and also return the ID of the allocated volume. This option
> allows plugins to choose a new name if there is a collision.
>
> In storage_migrate, the API version for the receiving side is checked.
>
> In Storage.pm's volume_impor
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> This function is intended to be used after doing a migration where some
> of the volume IDs changed.
>
> Signed-off-by: Fabian Ebner
> ---
> PVE/AbstractConfig.pm | 30 ++
> 1 file changed, 30 insertions(+)
>
> diff --g
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> PVE/QemuMigrate.pm | 12
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 44e4c57..464abc6 100644
> --- a/PVE/QemuMigrate.pm
> +++ b
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> PVE/CLI/pvesm.pm | 30 ++
> 1 file changed, 30 insertions(+)
>
> diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
> index 510faba..7c0e259 100755
> --- a/PVE/CLI/pvesm.pm
> +++ b/PVE/
On March 12, 2020 1:08 pm, Fabian Ebner wrote:
> Introduce a parameter $opts to allow for better control of which
> keys/volumes to use for the iteration and ability to reverse the order.
> Also, allow extra parameters for the function.
>
> Removes the '__snapshot'-prefix for future use from outsi
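The described iteration helper can be sketched roughly like this (Python with invented names; the real code is Perl in AbstractConfig.pm): an options hash controls which keys are visited and whether the order is reversed, and extra arguments are passed through to the callback.

```python
# Hypothetical model of an iteration helper taking an $opts-style
# parameter, as described above.
def foreach_volume(conf, func, opts=None, *extra_args):
    """Call func(key, value, *extra_args) for selected config keys."""
    opts = opts or {}
    keep = opts.get('filter', lambda k: True)
    keys = [k for k in sorted(conf) if keep(k)]
    if opts.get('reverse'):
        keys.reverse()          # ability to reverse the iteration order
    for key in keys:
        func(key, conf[key], *extra_args)

conf = {'mp0': 'a', 'mp1': 'b', 'net0': 'x'}
seen = []
foreach_volume(conf, lambda k, v, tag: seen.append((tag, k)),
               {'filter': lambda k: k.startswith('mp'), 'reverse': True},
               'extra')
# seen == [('extra', 'mp1'), ('extra', 'mp0')]
```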
rmat}
> : $drivedesc_hash->{$key}->{format};
couldn't we just put the unused schema into $drivedesc_hash as well?
is_valid_drivename would need to skip them[1], but we'd have all the dis
this got broken with PBS integration patches
Signed-off-by: Fabian Grünbichler
---
https://forum.proxmox.com/threads/proxmox-6-1-7-pct-restore-unable-to-parse-volume-id.67076/
src/PVE/LXC/Create.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/LXC/Create.pm b/src
On March 13, 2020 4:21 pm, Aaron Lauterer wrote:
> Add the `,audiodev=` property to audio devices when machine version is
> 4.2 or higher.
>
> With Qemu 4.2 a new `audiodev` property was introduced [0] to explicitly
> specify the backend to be used for the audio device. This is accompanied
> with
On March 11, 2020 2:26 pm, Fabian Grünbichler wrote:
> On March 11, 2020 11:44 am, Mira Limbeck wrote:
>> The NBD socket on the client side is kept open until the VM stops. In
>> the case of TCP this was not a problem, because we did not rely on it
>> closing. But with unix
On March 11, 2020 11:44 am, Mira Limbeck wrote:
> The reuse of the tunnel, which we're opening to communicate with the target
> node and to forward the unix socket for the state migration, for the NBD unix
> socket requires adding support for an array of sockets to forward, not just a
> single one.
On March 11, 2020 11:44 am, Mira Limbeck wrote:
> The NBD socket on the client side is kept open until the VM stops. In
> the case of TCP this was not a problem, because we did not rely on it
> closing. But with unix socket forwarding via SSH the SSH connection
> can't close if the socket is still
On March 11, 2020 11:44 am, Mira Limbeck wrote:
> As the NBD server spawned by qemu can only listen on a single socket,
> we're dependent on a version being passed to vm_start that indicates
> which protocol can be used (TCP or Unix) by the source node.
>
> The change in socket type (TCP to Unix)
nit: this should be ordered after #2 - first implement without breakage,
then expose new feature. not first expose new-feature as no-op, then
implement new feature ;)
On March 11, 2020 11:44 am, Mira Limbeck wrote:
> For secure live migration with local disks via NBD over a unix socket,
> we hav
On March 11, 2020 8:55 am, Alexandre DERUMIER wrote:
> Hi,
>
> Thinking about cross-cluster migration,
>
> is there any plan to share storage across different cluster?
> It's not uncommon to have multiple cluster as we are currently limited by
> corosync, and a shared storage with differents po
On March 10, 2020 11:28 am, Alexandre DERUMIER wrote:
> Hi,
>
> do have a small poc sample to create a tunnel like socat ?
>
> I would like to bench with iperf.
>
>
> I have some benchmark, on recent server with 3ghz cpu:
>
> nbd migration direct : 3,5gbit/s
>
> iperf through socat tunnel : 3
On March 10, 2020 7:25 am, Thomas Lamprecht wrote:
> On 3/10/20 7:08 AM, Dietmar Maurer wrote:
>>> I like the second option
>>>
>>> "mapping of source storage/bridge to target storage/bridge"
>> ...
>>> Also, it could be great to save mapping for reusing later
>>
>> Maybe in the new remote confi
On March 9, 2020 1:22 pm, Dominik Csapak wrote:
> On 3/9/20 11:45 AM, Fabian Grünbichler wrote:
>> On March 6, 2020 11:05 am, Dominik Csapak wrote:
>>> this api call syncs the users and groups from LDAP/AD to the
>>> user.cfg, by default only users, but can be configured
On March 6, 2020 11:05 am, Dominik Csapak wrote:
> this api call syncs the users and groups from LDAP/AD to the
> user.cfg, by default only users, but can be configured
>
> it also implements a 'prune' mode where we first delete all
> users/groups from the config and sync them again
>
> also add