Re: [pve-devel] [PATCH xtermjs] termproxy: rewrite in rust
> so we have a 'termproxy' crate+binary and a binary package with name
> 'pve-xtermjs'

This is quite confusing ...

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] Feature Request - HA for KVM Templates?
> How difficult would it be to add functionality to Proxmox, such that
> templates on shared storage (or Ceph) could be HA as well, and available
> across the cluster?

AFAIK this is already implemented. Simply mark the template as HA. Already tried that?
[pve-devel] [Patch qemu] PVE-Backup: qmp_query_backup - improve monitor output
---
 monitor/hmp-cmds.c   | 45 +---
 pve-backup.c         | 26 +
 qapi/block-core.json |  9 +++--
 3 files changed, 63 insertions(+), 17 deletions(-)

diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 7fd59b1c22..4f692c15a2 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -218,19 +218,42 @@ void hmp_info_backup(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "End time: %s", ctime(&info->end_time));
     }
 
-    int per = (info->has_total && info->total &&
-        info->has_transferred && info->transferred) ?
-        (info->transferred * 100)/info->total : 0;
-    int zero_per = (info->has_total && info->total &&
-        info->has_zero_bytes && info->zero_bytes) ?
-        (info->zero_bytes * 100)/info->total : 0;
     monitor_printf(mon, "Backup file: %s\n", info->backup_file);
     monitor_printf(mon, "Backup uuid: %s\n", info->uuid);
-    monitor_printf(mon, "Total size: %zd\n", info->total);
-    monitor_printf(mon, "Transferred bytes: %zd (%d%%)\n",
-                   info->transferred, per);
-    monitor_printf(mon, "Zero bytes: %zd (%d%%)\n",
-                   info->zero_bytes, zero_per);
+
+    if (!(info->has_total && info->total)) {
+        // this should not happen normally
+        monitor_printf(mon, "Total size: %d\n", 0);
+    } else {
+        bool incremental = false;
+        size_t total_or_dirty = info->total;
+        if (info->has_transferred) {
+            if (info->has_dirty && info->dirty) {
+                if (info->dirty < info->total) {
+                    total_or_dirty = info->dirty;
+                    incremental = true;
+                }
+            }
+        }
+
+        int per = (info->transferred * 100)/total_or_dirty;
+
+        monitor_printf(mon, "Backup mode: %s\n", incremental ? "incremental" : "full");
+
+        int zero_per = (info->has_zero_bytes && info->zero_bytes) ?
+            (info->zero_bytes * 100)/info->total : 0;
+        monitor_printf(mon, "Total size: %zd\n", info->total);
+        monitor_printf(mon, "Transferred bytes: %zd (%d%%)\n",
+                       info->transferred, per);
+        monitor_printf(mon, "Zero bytes: %zd (%d%%)\n",
+                       info->zero_bytes, zero_per);
+
+        if (info->has_reused) {
+            int reused_per = (info->reused * 100)/total_or_dirty;
+            monitor_printf(mon, "Reused bytes: %zd (%d%%)\n",
+                           info->reused, reused_per);
+        }
+    }
 
     qapi_free_BackupStatus(info);

diff --git a/pve-backup.c b/pve-backup.c
index 1c4f6cf9e0..3a71270213 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -41,7 +41,9 @@ static struct PVEBackupState {
         uuid_t uuid;
         char uuid_str[37];
         size_t total;
+        size_t dirty;
         size_t transferred;
+        size_t reused;
         size_t zero_bytes;
     } stat;
     int64_t speed;
@@ -108,11 +110,12 @@ static bool pvebackup_error_or_canceled(void)
     return error_or_canceled;
 }
 
-static void pvebackup_add_transfered_bytes(size_t transferred, size_t zero_bytes)
+static void pvebackup_add_transfered_bytes(size_t transferred, size_t zero_bytes, size_t reused)
 {
     qemu_mutex_lock(&backup_state.stat.lock);
     backup_state.stat.zero_bytes += zero_bytes;
     backup_state.stat.transferred += transferred;
+    backup_state.stat.reused += reused;
     qemu_mutex_unlock(&backup_state.stat.lock);
 }
 
@@ -151,7 +154,8 @@ pvebackup_co_dump_pbs_cb(
         pvebackup_propagate_error(local_err);
         return pbs_res;
     } else {
-        pvebackup_add_transfered_bytes(size, !buf ? size : 0);
+        size_t reused = (pbs_res == 0) ? size : 0;
+        pvebackup_add_transfered_bytes(size, !buf ? size : 0, reused);
     }
 
     return size;
@@ -211,11 +215,11 @@ pvebackup_co_dump_vma_cb(
     } else {
         if (remaining >= VMA_CLUSTER_SIZE) {
             assert(ret == VMA_CLUSTER_SIZE);
-            pvebackup_add_transfered_bytes(VMA_CLUSTER_SIZE, zero_bytes);
+            pvebackup_add_transfered_bytes(VMA_CLUSTER_SIZE, zero_bytes, 0);
             remaining -= VMA_CLUSTER_SIZE;
         } else {
             assert(ret == remaining);
-            pvebackup_add_transfered_bytes(remaining, zero_bytes);
+            pvebackup_add_transfered_bytes(remaining, zero_bytes, 0);
             remaining = 0;
         }
     }
@@ -650,6 +654,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
     }
 
     size_t total = 0;
+    size_t dirty = 0;
 
     l = di_list;
     while (l) {
@@ -731,11 +736,14 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
     }
 
     /* mark entire bitmap as dirty to make
[pve-devel] [PATCH qemu] PVE-Backup: remove dirty-bitmap in pvebackup_complete_cb for failed jobs
Note: We remove the device from di_list, so pvebackup_co_cleanup does not
handle this case.
---
 pve-backup.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/pve-backup.c b/pve-backup.c
index 61a8b4d2a4..1c4f6cf9e0 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -318,6 +318,12 @@ static void pvebackup_complete_cb(void *opaque, int ret)
     // remove self from job queue
     backup_state.di_list = g_list_remove(backup_state.di_list, di);
 
+    if (di->bitmap && ret < 0) {
+        // on error or cancel we cannot ensure synchronization of dirty
+        // bitmaps with backup server, so remove all and do full backup next
+        bdrv_release_dirty_bitmap(di->bitmap);
+    }
+
     g_free(di);
 
     qemu_mutex_unlock(&backup_state.backup_mutex);
-- 
2.20.1
Re: [pve-devel] [RFC pve-qemu] Add systemd journal logging patch
comments inline > On 06/30/2020 2:06 PM Stefan Reiter wrote: > > > Prints QEMU errors that occur *after* the "-daemonize" fork to the > systemd journal, instead of pushing them into /dev/null like before. > > Signed-off-by: Stefan Reiter > --- > > Useful for debugging rust panics for example. I'm sure there's other ways to > go > about this (log files? pass the journal fd from outside? pipe it into the > journal somehow?) but this one seems simple enough, though it of course > requires > linking QEMU against libsystemd. > > @dietmar: is this similar to what you had in mind? > > debian/control| 1 + > ...ct-stderr-to-journal-when-daemonized.patch | 50 +++ > debian/patches/series | 1 + > 3 files changed, 52 insertions(+) > create mode 100644 > debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch > > diff --git a/debian/control b/debian/control > index caceabb..e6d935d 100644 > --- a/debian/control > +++ b/debian/control > @@ -25,6 +25,7 @@ Build-Depends: autotools-dev, > libseccomp-dev, > libspice-protocol-dev (>= 0.12.14~), > libspice-server-dev (>= 0.14.0~), > + libsystemd-dev, > libusb-1.0-0-dev (>= 1.0.17-1), > libusbredirparser-dev (>= 0.6-2), > python3-minimal, > diff --git > a/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch > > b/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch > new file mode 100644 > index 000..f73de53 > --- /dev/null > +++ > b/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch > @@ -0,0 +1,50 @@ > +From Mon Sep 17 00:00:00 2001 > +From: Stefan Reiter > +Date: Tue, 30 Jun 2020 13:10:10 +0200 > +Subject: [PATCH] PVE: redirect stderr to journal when daemonized > + > +QEMU uses the logging for error messages usually, so LOG_ERR is most > +fitting. 
> +--- > + Makefile.objs | 1 + > + os-posix.c| 7 +-- > + 2 files changed, 6 insertions(+), 2 deletions(-) > + > +diff --git a/Makefile.objs b/Makefile.objs > +index b7d58e592e..105f23bff7 100644 > +--- a/Makefile.objs > b/Makefile.objs > +@@ -55,6 +55,7 @@ common-obj-y += net/ > + common-obj-y += qdev-monitor.o > + common-obj-$(CONFIG_WIN32) += os-win32.o > + common-obj-$(CONFIG_POSIX) += os-posix.o > ++os-posix.o-libs := -lsystemd > + > + common-obj-$(CONFIG_LINUX) += fsdev/ > + > +diff --git a/os-posix.c b/os-posix.c > +index 3cd52e1e70..ab4d052c62 100644 > +--- a/os-posix.c > b/os-posix.c > +@@ -28,6 +28,8 @@ > + #include > + #include > + #include > ++#include > ++#include > + > + #include "qemu-common.h" > + /* Needed early for CONFIG_BSD etc. */ > +@@ -309,9 +311,10 @@ void os_setup_post(void) > + > + dup2(fd, 0); > + dup2(fd, 1); I guess we also want to redirect stdout. Or does that produce too much noise? > +-/* In case -D is given do not redirect stderr to /dev/null */ > ++/* In case -D is given do not redirect stderr to journal */ > + if (!qemu_logfile) { > +-dup2(fd, 2); > ++int journal_fd = sd_journal_stream_fd("QEMU", LOG_ERR, 0); > ++dup2(journal_fd, 2); > + } > + > + close(fd); > diff --git a/debian/patches/series b/debian/patches/series > index 5d6a5d6..e658c1a 100644 > --- a/debian/patches/series > +++ b/debian/patches/series > @@ -50,3 +50,4 @@ pve/0048-savevm-async-add-debug-timing-prints.patch > pve/0049-Add-some-qemu_vfree-statements-to-prevent-memory-lea.patch > pve/0050-Fix-backup-for-not-64k-aligned-storages.patch > pve/0051-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch > +pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch > -- > 2.20.1 > > > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH qemu-server] avoid backup command timeout with pbs
---
 PVE/VZDump/QemuServer.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 1a0d437..147a3e6 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -403,6 +403,8 @@ sub archive_pbs {
     $params->{fingerprint} = $fingerprint if defined($fingerprint);
     $params->{'firewall-file'} = $firewall if -e $firewall;
 
+    $params->{timeout} = 60; # give some time to connect to the backup server
+
     my $res = eval { mon_cmd($vmid, "backup", %$params) };
     my $qmperr = $@;
     $backup_job_uuid = $res->{UUID} if $res;
-- 
2.20.1
Re: [pve-devel] cloudinit: generate server ssh keys on proxmox side ?
> Maybe could we generate them once at proxmox side ?

-1

Copying private keys is bad ...
Re: [pve-devel] RFC: sdn: add ip management (IPAM -DHCP) ideas
comments inline

> When user will create a new vm or add a nic to the vm, he could choose ip
> address "auto", and the next available ip address will be returned by the
> ipam driver.

Each NIC may have an associated network allocation pool, where "auto" tries
to figure out the correct pool automagically.

> User could also choose a specific ip address with verification of
> availability.

I thought this is an addition to the network allocation pool. If set, it
tries to allocate a specific IP address inside the allocation pool.

> In a second step, we could also add dhcp server features, with static ip/mac
> leases. (Kea dhcp seems a good candidate), with 1 local dhcp server by node
> (only responding to local vms).
> for bgp-evpn it's easy because we already have an anycast gateway ip, so it
> can be used by the dhcp server.
> for vlan && layer2 plugin, I wonder if we could also assign some kind of
> anycast ip (same ip on each host/vnet), but with filtering
> (iptables, ebtables, ...)
> It could also work to implement cloudinit network metadata.

I would prefer to delegate that part to the VM (cloudinit). Also, I like the
idea that IPAM has a plugin architecture. So it is up to the plugin to
provide a dhcp service?

> Here some implementation docs from openstack && opennebula

Thanks for the links!

> Some notes/ideas for the implementation/config:
> --
> /etc/pve/sdn/subnets.cfg
> -
>
> subnet: subnet1
>     cidr 192.168.0.0/24
>     allocation-pools 192.168.0.10-17, 192.168.0.70-10, 192.168.0.100
>     (default is the full cidr without network/broadcast address)

I thought IP addresses should be managed by the IPAM plugin? Why would we
specify them here?
Re: [pve-devel] bug in backup/restore for non-64K aligned disks + host mem leak to backups
May I request that you open a bug at: https://bugzilla.proxmox.com

That way it is easier to track the bug status ...

> On 06/18/2020 3:22 PM Dietmar Maurer wrote:
>
> > Don't get me wrong, I just wanted to report what IMO is a bug in a
> > backup tool (an actually pretty clever one looking at the format), and
> > that to me is the point of view that really matters here.
>
> Sure, this is clearly a bug and we should fix it - thanks for reporting.
> But AFAIK this does not trigger normally, so this is low priority.
>
> patches welcome...
Re: [pve-devel] bug in backup/restore for non-64K aligned disks + host mem leak to backups
> Don't get me wrong, I just wanted to report what IMO is a bug in a
> backup tool (an actually pretty clever one looking at the format), and
> that to me is the point of view that really matters here.

Sure, this is clearly a bug and we should fix it - thanks for reporting.
But AFAIK this does not trigger normally, so this is low priority.

patches welcome...
Re: [pve-devel] bug in backup/restore for non-64K aligned disks + host mem leak to backups
> I initially noticed that backup/restore has a bug when the disk size is
> not a multiple of 64K for DRBD devices.

But qemu block size is 64K, so disk size should/must always be a multiple
of 64K? AFAIK we never generate such disk sizes?
Re: [pve-devel] [PATCH storage/manager] create zpools with stable disk paths
Why do we handle this at the GUI side? I would prefer to do that on the host instead...
Re: [pve-devel] [PATCH container] lxc: fall back to 'unmanaged' on unknown ostype
> Rather than failing, it would be nice to fall back to 'unmanaged' when
> the ostype cannot be determined/found.

Why exactly would that be nice? FWICT it would start the container with a
wrong and unexpected setup.
Re: [pve-devel] [PATCH storage v4 02/12] storage: replace build-in stat with File::stat
> On April 22, 2020 6:00 PM Alwin Antreich wrote:
>
> On Wed, Apr 22, 2020 at 05:35:05PM +0200, Dietmar Maurer wrote:
> > AFAIK this can have ugly side effects ...
> Okay, I was not aware of any known side effects.
>
> I took the File::stat, since we use it already in pve-cluster,
> qemu-server, pve-common, ... . And an off-list discussion with Thomas and
> Fabian G.
>
> If there is a better solution, I am happy to work on it.

# grep -r "use File::stat" /usr/share/perl5/PVE/
/usr/share/perl5/PVE/QemuServer/Helpers.pm:use File::stat;
/usr/share/perl5/PVE/Storage/ISCSIPlugin.pm:use File::stat;
/usr/share/perl5/PVE/APIServer/AnyEvent.pm:use File::stat qw();
/usr/share/perl5/PVE/AccessControl.pm:use File::stat;
/usr/share/perl5/PVE/Cluster.pm:use File::stat qw();
/usr/share/perl5/PVE/LXC/Setup/Base.pm:use File::stat;
/usr/share/perl5/PVE/QemuServer.pm:use File::stat;
/usr/share/perl5/PVE/INotify.pm:use File::stat;
/usr/share/perl5/PVE/API2/APT.pm:use File::stat ();

So I would use:

    use File::stat qw();

to avoid overriding the core stat() and lstat() functions.
Re: [pve-devel] [PATCH storage v4 02/12] storage: replace build-in stat with File::stat
AFAIK this can have ugly side effects ... > On April 22, 2020 4:57 PM Alwin Antreich wrote: > > > to minimize variable declarations. And allow to mock this method in > tests instead of the perl build-in stat. > > Signed-off-by: Alwin Antreich > --- > PVE/Diskmanage.pm | 9 + > PVE/Storage/Plugin.pm | 34 ++ > 2 files changed, 15 insertions(+), 28 deletions(-) > > diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm > index 13e7cd8..cac944d 100644 > --- a/PVE/Diskmanage.pm > +++ b/PVE/Diskmanage.pm > @@ -6,6 +6,7 @@ use PVE::ProcFSTools; > use Data::Dumper; > use Cwd qw(abs_path); > use Fcntl ':mode'; > +use File::stat; > use JSON; > > use PVE::Tools qw(extract_param run_command file_get_contents > file_read_firstline dir_glob_regex dir_glob_foreach trim); > @@ -673,11 +674,11 @@ sub get_disks { > sub get_partnum { > my ($part_path) = @_; > > -my ($mode, $rdev) = (stat($part_path))[2,6]; > +my $st = stat($part_path); > > -next if !$mode || !S_ISBLK($mode) || !$rdev; > -my $major = PVE::Tools::dev_t_major($rdev); > -my $minor = PVE::Tools::dev_t_minor($rdev); > +next if !$st->mode || !S_ISBLK($st->mode) || !$st->rdev; > +my $major = PVE::Tools::dev_t_major($st->rdev); > +my $minor = PVE::Tools::dev_t_minor($st->rdev); > my $partnum_path = "/sys/dev/block/$major:$minor/"; > > my $partnum; > diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm > index 4489a77..d2dfad6 100644 > --- a/PVE/Storage/Plugin.pm > +++ b/PVE/Storage/Plugin.pm > @@ -7,6 +7,7 @@ use Fcntl ':mode'; > use File::chdir; > use File::Path; > use File::Basename; > +use File::stat; > use Time::Local qw(timelocal); > > use PVE::Tools qw(run_command); > @@ -718,12 +719,10 @@ sub free_image { > sub file_size_info { > my ($filename, $timeout) = @_; > > -my @fs = stat($filename); > -my $mode = $fs[2]; > -my $ctime = $fs[10]; > +my $st = stat($filename); > > -if (S_ISDIR($mode)) { > - return wantarray ? (0, 'subvol', 0, undef, $ctime) : 1; > +if (S_ISDIR($st->mode)) { > + return wantarray ? 
(0, 'subvol', 0, undef, $st->ctime) : 1; > } > > my $json = ''; > @@ -741,7 +740,7 @@ sub file_size_info { > > my ($size, $format, $used, $parent) = $info->@{qw(virtual-size format > actual-size backing-filename)}; > > -return wantarray ? ($size, $format, $used, $parent, $ctime) : $size; > +return wantarray ? ($size, $format, $used, $parent, $st->ctime) : $size; > } > > sub volume_size_info { > @@ -918,22 +917,9 @@ my $get_subdir_files = sub { > > foreach my $fn (<$path/*>) { > > - my ($dev, > - $ino, > - $mode, > - $nlink, > - $uid, > - $gid, > - $rdev, > - $size, > - $atime, > - $mtime, > - $ctime, > - $blksize, > - $blocks > - ) = stat($fn); > - > - next if S_ISDIR($mode); > + my $st = stat($fn); > + > + next if S_ISDIR($st->mode); > > my $info; > > @@ -972,8 +958,8 @@ my $get_subdir_files = sub { > }; > } > > - $info->{size} = $size; > - $info->{ctime} //= $ctime; > + $info->{size} = $st->size; > + $info->{ctime} //= $st->ctime; > > push @$res, $info; > } > -- > 2.20.1 > > > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] Fwd: proxmox-backup dev env
> Hi I downloaded and did the proxmox-backup cargo build, but now when I try
> to build make dinstall it fails because it can't find the crate on the
> registry, how did you solve this problem you have an internal cargo
> registry ? I tried cargo local registry and also cargo vendor but I can't
> get around the problem.

We have a Debian repository with all rust development packages required:

    deb http://download.proxmox.com/debian/devel buster main

We build everything using those rust debian packages. The registry for
debian packaged rust is /usr/share/cargo/registry/ ...
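For the archives: one way to point cargo at a directory registry like /usr/share/cargo/registry/ is cargo's source-replacement mechanism in `~/.cargo/config`. This is only a sketch of the standard cargo feature — the exact setup the Debian rust packaging (and the proxmox build scripts) use may well differ:

```toml
# ~/.cargo/config -- replace crates.io with the local directory registry
# (the source name "debian-packages" is arbitrary / an assumption)
[source.crates-io]
replace-with = "debian-packages"

[source.debian-packages]
directory = "/usr/share/cargo/registry"
```

With such a replacement in place, `cargo build` resolves dependencies from the packaged crates instead of trying to reach crates.io.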
Re: [pve-devel] learning a new language: rust vs golang vs raku ?
> I think I'm going to learn a new language.
>
> What do you think about rust vs golang vs raku, coming from perl/python/php?
> (I haven't touched C much since school in 99 ;)
>
> I'm seeing that some rust code is available in the proxmox git repo.

We decided to use RUST for future projects, because we can write better code
using this language.

> is it fast to learn ?

Not really, but it is worth a try. Most people love it...
Re: [pve-devel] [zsync] fix: check for incremental sync snapshot.
Why does the patch ignore the output from the command?

> On March 18, 2020 7:51 AM Wolfgang Link wrote:
>
> For an incremental sync you need the last_snap on both sides.
> ---
>  pve-zsync | 13 -
>  1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/pve-zsync b/pve-zsync
> index ea3178e..893baf0 100755
> --- a/pve-zsync
> +++ b/pve-zsync
> @@ -931,6 +931,7 @@ sub snapshot_destroy {
>      }
>  }
>
> +# check if snapshot for incremental sync exist on dest side
>  sub snapshot_exist {
>      my ($source , $dest, $method, $dest_user) = @_;
>
> @@ -940,22 +941,16 @@ sub snapshot_exist {
>
>      my $path = $dest->{all};
>      $path .= "/$source->{last_part}" if $source->{last_part};
> -    $path .= "\@$source->{old_snap}";
> +    $path .= "\@$source->{last_snap}";
>
>      push @$cmd, $path;
>
> -
> -    my $text = "";
> -    eval {$text = run_cmd($cmd);};
> +    eval {run_cmd($cmd)};
>      if (my $erro = $@) {
>          warn "WARN: $erro";
>          return undef;
>      }
> -
> -    while ($text && $text =~ s/^(.*?)(\n|$)//) {
> -        my $line = $1;
> -        return 1 if $line =~ m/^.*$source->{old_snap}$/;
> -    }
> +    return 1;
>  }
>
>  sub send_image {
> --
> 2.20.1
Re: [pve-devel] Proxmox Backup Server
> What is the roadmap for this tool?

It will be announced here as soon as we have detailed plans.

> What are the features it's supposed to support?
> Just curious!

As already stated, sources are available at git.proxmox.com.
Re: [pve-devel] Proxmox Backup Server
> I've seen quite a few mentions of Proxmox Backup Server recently (in commit
> messages). Is there some doc written about it somewhere ? What kind of
> features will it provide etc. ?

This is still experimental code, so we only have the source code available:

https://git.proxmox.com/

All backup related code is written in RUST, so check the RUST section.
Re: [pve-devel] [PATCH qemu-server 4/4] implement PoC migration to remote cluster/node
> do we need a stored mapping for intra-cluster migration as well? we
> could put the same entities into the node config as well, and the
> order of precedence would be:

IMHO this is not necessary
Re: [pve-devel] [PATCH qemu-server 4/4] implement PoC migration to remote cluster/node
> I like the second option
>
> "mapping of source storage/bridge to target storage/bridge" ...
> Also, it could be great to save mapping for reusing later

Maybe in the new remote configuration (@fabian)?
Re: [pve-devel] [PATCH 24/31] PVE: move snapshot cleanup into bottom half
> for the record, this could be squashed into "[PATCH 17/31] PVE: internal
> snapshot async" no biggie, but if we already go for a cleanup round..

In the first step, I only merged backup related patches (kept the rest as
they were). Hoped this makes review easier ...

Once that is finished, I can try to clean up the rest ...
[pve-devel] [PATCH 31/31] PVE-Backup - proxmox backup patches for qemu
--- blockdev.c| 823 ++ hmp-commands-info.hx | 13 + hmp-commands.hx | 31 ++ include/monitor/hmp.h | 3 + monitor/hmp-cmds.c| 69 qapi/block-core.json | 91 + qapi/common.json | 13 + qapi/misc.json| 13 - 8 files changed, 1043 insertions(+), 13 deletions(-) diff --git a/blockdev.c b/blockdev.c index c7fa663ebf..dac9554a3e 100644 --- a/blockdev.c +++ b/blockdev.c @@ -36,6 +36,7 @@ #include "hw/block/block.h" #include "block/blockjob.h" #include "block/qdict.h" +#include "block/blockjob_int.h" #include "block/throttle-groups.h" #include "monitor/monitor.h" #include "qemu/error-report.h" @@ -45,6 +46,7 @@ #include "qapi/qapi-commands-block.h" #include "qapi/qapi-commands-transaction.h" #include "qapi/qapi-visit-block-core.h" +#include "qapi/qapi-types-misc.h" #include "qapi/qmp/qdict.h" #include "qapi/qmp/qnum.h" #include "qapi/qmp/qstring.h" @@ -63,6 +65,7 @@ #include "qemu/help_option.h" #include "qemu/main-loop.h" #include "qemu/throttle-options.h" +#include "vma.h" static QTAILQ_HEAD(, BlockDriverState) monitor_bdrv_states = QTAILQ_HEAD_INITIALIZER(monitor_bdrv_states); @@ -3212,6 +3215,826 @@ out: aio_context_release(aio_context); } +/* PVE backup related function */ + +typedef struct BlockOnCoroutineWrapper { +AioContext *ctx; +CoroutineEntry *entry; +void *entry_arg; +bool finished; +} BlockOnCoroutineWrapper; + +static void coroutine_fn block_on_coroutine_wrapper(void *opaque) +{ +BlockOnCoroutineWrapper *wrapper = opaque; +wrapper->entry(wrapper->entry_arg); +wrapper->finished = true; +aio_wait_kick(); +} + +static void block_on_coroutine_fn(CoroutineEntry *entry, void *entry_arg) +{ +assert(!qemu_in_coroutine()); + +AioContext *ctx = qemu_get_current_aio_context(); +BlockOnCoroutineWrapper wrapper = { +.finished = false, +.entry = entry, +.entry_arg = entry_arg, +.ctx = ctx, +}; +Coroutine *wrapper_co = qemu_coroutine_create(block_on_coroutine_wrapper, ); +aio_co_enter(ctx, wrapper_co); +AIO_WAIT_WHILE(ctx, !wrapper.finished); +} + +static struct PVEBackupState { 
+struct { +// Everithing accessed from qmp command, protected using rwlock +CoRwlock rwlock; +Error *error; +time_t start_time; +time_t end_time; +char *backup_file; +uuid_t uuid; +char uuid_str[37]; +size_t total; +size_t transferred; +size_t zero_bytes; +bool cancel; +} stat; +int64_t speed; +VmaWriter *vmaw; +GList *di_list; +CoMutex backup_mutex; +bool mutex_initialized; +} backup_state; + +typedef struct PVEBackupDevInfo { +BlockDriverState *bs; +size_t size; +uint8_t dev_id; +bool completed; +char targetfile[PATH_MAX]; +BlockDriverState *target; +} PVEBackupDevInfo; + +static void pvebackup_co_run_next_job(void); + +static int coroutine_fn pvebackup_co_dump_cb(void *opaque, + uint64_t start, uint64_t bytes, + const void *pbuf) +{ +assert(qemu_in_coroutine()); + +const uint64_t size = bytes; +const unsigned char *buf = pbuf; +PVEBackupDevInfo *di = opaque; + +qemu_co_rwlock_rdlock(_state.stat.rwlock); +bool cancel = backup_state.stat.cancel; +qemu_co_rwlock_unlock(_state.stat.rwlock); + +if (cancel) { +return size; // return success +} + +qemu_co_mutex_lock(_state.backup_mutex); + +uint64_t cluster_num = start / VMA_CLUSTER_SIZE; +if ((cluster_num * VMA_CLUSTER_SIZE) != start) { +qemu_co_rwlock_rdlock(_state.stat.rwlock); +if (!backup_state.stat.error) { +qemu_co_rwlock_upgrade(_state.stat.rwlock); +error_setg(_state.stat.error, + "got unaligned write inside backup dump " + "callback (sector %ld)", start); +} +qemu_co_rwlock_unlock(_state.stat.rwlock); +qemu_co_mutex_unlock(_state.backup_mutex); +return -1; // not aligned to cluster size +} + +int ret = -1; + +if (backup_state.vmaw) { +size_t zero_bytes = 0; +uint64_t remaining = size; +while (remaining > 0) { +ret = vma_writer_write(backup_state.vmaw, di->dev_id, cluster_num, + buf, _bytes); +++cluster_num; +if (buf) { +buf += VMA_CLUSTER_SIZE; +} +if (ret < 0) { +qemu_co_rwlock_rdlock(_state.stat.rwlock); +if (!backup_state.stat.error) { +qemu_co_rwlock_upgrade(_state.stat.rwlock); 
+vma_writer_error_propagate(backup_state.vmaw, _state.stat.error); +} +qemu_co_rwlock_unlock(_state.stat.rwlock); + +
[pve-devel] [PATCH 30/31] PVE-Backup: add backup-dump block driver
- add backup-dump block driver block/backup-dump.c - move BackupBlockJob declaration from block/backup.c to include/block/block_int.h - block/backup.c - backup-job-create: also consider source cluster size - block/io.c - bdrv_do_drained_begin_quiesce: check for coroutine - job.c: make job_should_pause non-static --- block/Makefile.objs | 1 + block/backup-dump.c | 170 ++ block/backup.c| 23 ++ block/io.c| 8 +- include/block/block_int.h | 30 +++ job.c | 3 +- 6 files changed, 215 insertions(+), 20 deletions(-) create mode 100644 block/backup-dump.c diff --git a/block/Makefile.objs b/block/Makefile.objs index a10ceabf5b..5cd9e40d8d 100644 --- a/block/Makefile.objs +++ b/block/Makefile.objs @@ -33,6 +33,7 @@ block-obj-$(CONFIG_RBD) += rbd.o block-obj-$(CONFIG_GLUSTERFS) += gluster.o block-obj-$(CONFIG_VXHS) += vxhs.o block-obj-$(CONFIG_LIBSSH) += ssh.o +block-obj-y += backup-dump.o block-obj-y += accounting.o dirty-bitmap.o block-obj-y += write-threshold.o block-obj-y += backup.o diff --git a/block/backup-dump.c b/block/backup-dump.c new file mode 100644 index 00..ff4ecd4557 --- /dev/null +++ b/block/backup-dump.c @@ -0,0 +1,170 @@ +/* + * BlockDriver to send backup data stream to a callback function + * + * Copyright (C) 2020 Proxmox Server Solutions GmbH + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ * + */ + +#include "qemu/osdep.h" +#include "qemu-common.h" +#include "qom/object_interfaces.h" +#include "block/block_int.h" + +typedef struct { +int dump_cb_block_size; +uint64_tbyte_size; +BackupDumpFunc *dump_cb; +void *dump_cb_data; +} BDRVBackupDumpState; + +static int qemu_backup_dump_get_info(BlockDriverState *bs, BlockDriverInfo *bdi) +{ +BDRVBackupDumpState *s = bs->opaque; + +bdi->cluster_size = s->dump_cb_block_size; +bdi->unallocated_blocks_are_zero = true; +return 0; +} + +static int qemu_backup_dump_check_perm( +BlockDriverState *bs, +uint64_t perm, +uint64_t shared, +Error **errp) +{ +/* Nothing to do. */ +return 0; +} + +static void qemu_backup_dump_set_perm( +BlockDriverState *bs, +uint64_t perm, +uint64_t shared) +{ +/* Nothing to do. */ +} + +static void qemu_backup_dump_abort_perm_update(BlockDriverState *bs) +{ +/* Nothing to do. */ +} + +static void qemu_backup_dump_refresh_limits(BlockDriverState *bs, Error **errp) +{ +bs->bl.request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O */ +} + +static void qemu_backup_dump_close(BlockDriverState *bs) +{ +/* Nothing to do. 
*/ +} + +static int64_t qemu_backup_dump_getlength(BlockDriverState *bs) +{ +BDRVBackupDumpState *s = bs->opaque; + +return s->byte_size; +} + +static coroutine_fn int qemu_backup_dump_co_writev( +BlockDriverState *bs, +int64_t sector_num, +int nb_sectors, +QEMUIOVector *qiov, +int flags) +{ +/* flags can be only values we set in supported_write_flags */ +assert(flags == 0); + +BDRVBackupDumpState *s = bs->opaque; +off_t offset = sector_num * BDRV_SECTOR_SIZE; + +uint64_t written = 0; + +for (int i = 0; i < qiov->niov; ++i) { +const struct iovec *v = >iov[i]; + +int rc = s->dump_cb(s->dump_cb_data, offset, v->iov_len, v->iov_base); +if (rc < 0) { +return rc; +} + +if (rc != v->iov_len) { +return -EIO; +} + +written += v->iov_len; +offset += v->iov_len; +} + +return written; +} + +static void qemu_backup_dump_child_perm( +BlockDriverState *bs, +BdrvChild *c, +const BdrvChildRole *role, +BlockReopenQueue *reopen_queue, +uint64_t perm, uint64_t shared, +uint64_t *nperm, uint64_t *nshared) +{ +*nperm = BLK_PERM_ALL; +*nshared = BLK_PERM_ALL; +} + +static BlockDriver bdrv_backup_dump_drive = { +.format_name = "backup-dump-drive", +.protocol_name= "backup-dump", +.instance_size= sizeof(BDRVBackupDumpState), + +.bdrv_close = qemu_backup_dump_close, +.bdrv_has_zero_init = bdrv_has_zero_init_1, +.bdrv_getlength = qemu_backup_dump_getlength, +.bdrv_get_info= qemu_backup_dump_get_info, + +.bdrv_co_writev = qemu_backup_dump_co_writev, + +.bdrv_refresh_limits = qemu_backup_dump_refresh_limits, +.bdrv_check_perm = qemu_backup_dump_check_perm, +.bdrv_set_perm= qemu_backup_dump_set_perm, +.bdrv_abort_perm_update = qemu_backup_dump_abort_perm_update, +.bdrv_child_perm = qemu_backup_dump_child_perm, +}; + +static void bdrv_backup_dump_init(void) +{ +bdrv_register(_backup_dump_drive); +} +
[pve-devel] [PATCH 16/31] PVE: qapi: modify spice query
From: Wolfgang Bumiller Provide the last ticket in the SpiceInfo struct optionally. Signed-off-by: Thomas Lamprecht --- qapi/ui.json| 3 +++ ui/spice-core.c | 5 + 2 files changed, 8 insertions(+) diff --git a/qapi/ui.json b/qapi/ui.json index e04525d8b4..6127990e23 100644 --- a/qapi/ui.json +++ b/qapi/ui.json @@ -211,11 +211,14 @@ # # @channels: a list of @SpiceChannel for each active spice channel # +# @ticket: The last ticket set with set_password +# # Since: 0.14.0 ## { 'struct': 'SpiceInfo', 'data': {'enabled': 'bool', 'migrated': 'bool', '*host': 'str', '*port': 'int', '*tls-port': 'int', '*auth': 'str', '*compiled-version': 'str', + '*ticket': 'str', 'mouse-mode': 'SpiceQueryMouseMode', '*channels': ['SpiceChannel']}, 'if': 'defined(CONFIG_SPICE)' } diff --git a/ui/spice-core.c b/ui/spice-core.c index ca04965ead..243466c13d 100644 --- a/ui/spice-core.c +++ b/ui/spice-core.c @@ -539,6 +539,11 @@ SpiceInfo *qmp_query_spice(Error **errp) micro = SPICE_SERVER_VERSION & 0xff; info->compiled_version = g_strdup_printf("%d.%d.%d", major, minor, micro); +if (auth_passwd) { +info->has_ticket = true; +info->ticket = g_strdup(auth_passwd); +} + if (port) { info->has_port = true; info->port = port; -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 19/31] PVE: backup: modify job api
From: Wolfgang Bumiller Introduce a pause_count parameter to start a backup in paused mode. This way backups of multiple drives can be started up sequentially via the completion callback while having been started at the same point in time. Signed-off-by: Thomas Lamprecht --- block/backup.c| 3 +++ block/replication.c | 2 +- blockdev.c| 3 ++- include/block/block_int.h | 1 + job.c | 2 +- 5 files changed, 8 insertions(+), 3 deletions(-) diff --git a/block/backup.c b/block/backup.c index cf62b1a38c..c155081de2 100644 --- a/block/backup.c +++ b/block/backup.c @@ -347,6 +347,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs, BlockdevOnError on_target_error, int creation_flags, BlockCompletionFunc *cb, void *opaque, + int pause_count, JobTxn *txn, Error **errp) { int64_t len; @@ -468,6 +469,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs, block_job_add_bdrv(>common, "target", target, 0, BLK_PERM_ALL, _abort); +job->common.job.pause_count += pause_count; + return >common; error: diff --git a/block/replication.c b/block/replication.c index 99532ce521..ec8de7b427 100644 --- a/block/replication.c +++ b/block/replication.c @@ -546,7 +546,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode, 0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL, -backup_job_completed, bs, NULL, _err); +backup_job_completed, bs, 0, NULL, _err); if (local_err) { error_propagate(errp, local_err); backup_job_cleanup(bs); diff --git a/blockdev.c b/blockdev.c index 8e029e9c01..c7fa663ebf 100644 --- a/blockdev.c +++ b/blockdev.c @@ -3583,7 +3583,8 @@ static BlockJob *do_backup_common(BackupCommon *backup, backup->filter_node_name, backup->on_source_error, backup->on_target_error, -job_flags, NULL, NULL, txn, errp); +job_flags, NULL, NULL, 0, txn, errp); + return job; } diff --git a/include/block/block_int.h b/include/block/block_int.h index dd033d0b37..b0d5eb9485 100644 
--- a/include/block/block_int.h +++ b/include/block/block_int.h @@ -1215,6 +1215,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs, BlockdevOnError on_target_error, int creation_flags, BlockCompletionFunc *cb, void *opaque, +int pause_count, JobTxn *txn, Error **errp); void hmp_drive_add_node(Monitor *mon, const char *optstr); diff --git a/job.c b/job.c index 04409b40aa..7554f735e3 100644 --- a/job.c +++ b/job.c @@ -888,7 +888,7 @@ void job_start(Job *job) job->co = qemu_coroutine_create(job_co_entry, job); job->pause_count--; job->busy = true; -job->paused = false; +job->paused = job->pause_count > 0; job_state_transition(job, JOB_STATUS_RUNNING); aio_co_enter(job->aio_context, job->co); } -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 13/31] PVE: [Up] qemu-img dd: add -n skip_create
From: Alexandre Derumier Signed-off-by: Thomas Lamprecht --- qemu-img.c | 23 ++- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/qemu-img.c b/qemu-img.c index 8da1ea3951..ea3edb4f04 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -4535,7 +4535,7 @@ static int img_dd(int argc, char **argv) const char *fmt = NULL; int64_t size = 0, readsize = 0; int64_t block_count = 0, out_pos, in_pos; -bool force_share = false; +bool force_share = false, skip_create = false; struct DdInfo dd = { .flags = 0, .count = 0, @@ -4573,7 +4573,7 @@ static int img_dd(int argc, char **argv) { 0, 0, 0, 0 } }; -while ((c = getopt_long(argc, argv, ":hf:O:U", long_options, NULL))) { +while ((c = getopt_long(argc, argv, ":hf:O:U:n", long_options, NULL))) { if (c == EOF) { break; } @@ -4593,6 +4593,9 @@ static int img_dd(int argc, char **argv) case 'h': help(); break; +case 'n': +skip_create = true; +break; case 'U': force_share = true; break; @@ -4733,13 +4736,15 @@ static int img_dd(int argc, char **argv) size - in.bsz * in.offset, _abort); } -ret = bdrv_create(drv, out.filename, opts, _err); -if (ret < 0) { -error_reportf_err(local_err, - "%s: error while creating output image: ", - out.filename); -ret = -1; -goto out; +if (!skip_create) { +ret = bdrv_create(drv, out.filename, opts, _err); +if (ret < 0) { +error_reportf_err(local_err, + "%s: error while creating output image: ", + out.filename); +ret = -1; +goto out; +} } /* TODO, we can't honour --image-opts for the target, -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 18/31] PVE: block: add the zeroinit block driver filter
From: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- block/Makefile.objs | 1 + block/zeroinit.c| 204 2 files changed, 205 insertions(+) create mode 100644 block/zeroinit.c diff --git a/block/Makefile.objs b/block/Makefile.objs index e394fe0b6c..a10ceabf5b 100644 --- a/block/Makefile.objs +++ b/block/Makefile.objs @@ -11,6 +11,7 @@ block-obj-$(CONFIG_QED) += qed.o qed-l2-cache.o qed-table.o qed-cluster.o block-obj-$(CONFIG_QED) += qed-check.o block-obj-y += vhdx.o vhdx-endian.o vhdx-log.o block-obj-y += quorum.o +block-obj-y += zeroinit.o block-obj-y += blkdebug.o blkverify.o blkreplay.o block-obj-$(CONFIG_PARALLELS) += parallels.o block-obj-y += blklogwrites.o diff --git a/block/zeroinit.c b/block/zeroinit.c new file mode 100644 index 00..b74a78ece6 --- /dev/null +++ b/block/zeroinit.c @@ -0,0 +1,204 @@ +/* + * Filter to fake a zero-initialized block device. + * + * Copyright (c) 2016 Wolfgang Bumiller + * Copyright (c) 2016 Proxmox Server Solutions GmbH + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ */ + +#include "qemu/osdep.h" +#include "qapi/error.h" +#include "block/block_int.h" +#include "qapi/qmp/qdict.h" +#include "qapi/qmp/qstring.h" +#include "qemu/cutils.h" +#include "qemu/option.h" +#include "qemu/module.h" + +typedef struct { +bool has_zero_init; +int64_t extents; +} BDRVZeroinitState; + +/* Valid blkverify filenames look like blkverify:path/to/raw_image:path/to/image */ +static void zeroinit_parse_filename(const char *filename, QDict *options, + Error **errp) +{ +QString *raw_path; + +/* Parse the blkverify: prefix */ +if (!strstart(filename, "zeroinit:", )) { +/* There was no prefix; therefore, all options have to be already + present in the QDict (except for the filename) */ +return; +} + +raw_path = qstring_from_str(filename); +qdict_put(options, "x-next", raw_path); +} + +static QemuOptsList runtime_opts = { +.name = "zeroinit", +.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head), +.desc = { +{ +.name = "x-next", +.type = QEMU_OPT_STRING, +.help = "[internal use only, will be removed]", +}, +{ +.name = "x-zeroinit", +.type = QEMU_OPT_BOOL, +.help = "set has_initialized_zero flag", +}, +{ /* end of list */ } +}, +}; + +static int zeroinit_open(BlockDriverState *bs, QDict *options, int flags, + Error **errp) +{ +BDRVZeroinitState *s = bs->opaque; +QemuOpts *opts; +Error *local_err = NULL; +int ret; + +s->extents = 0; + +opts = qemu_opts_create(_opts, NULL, 0, _abort); +qemu_opts_absorb_qdict(opts, options, _err); +if (local_err) { +error_propagate(errp, local_err); +ret = -EINVAL; +goto fail; +} + +/* Open the raw file */ +bs->file = bdrv_open_child(qemu_opt_get(opts, "x-next"), options, "next", + bs, _file, false, _err); +if (local_err) { +ret = -EINVAL; +error_propagate(errp, local_err); +goto fail; +} + +/* set the options */ +s->has_zero_init = qemu_opt_get_bool(opts, "x-zeroinit", true); + +ret = 0; +fail: +if (ret < 0) { +bdrv_unref_child(bs, bs->file); +} +qemu_opts_del(opts); +return ret; +} + +static void 
zeroinit_close(BlockDriverState *bs) +{ +BDRVZeroinitState *s = bs->opaque; +(void)s; +} + +static int64_t zeroinit_getlength(BlockDriverState *bs) +{ +return bdrv_getlength(bs->file->bs); +} + +static int coroutine_fn zeroinit_co_preadv(BlockDriverState *bs, +uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags) +{ +return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags); +} + +static int coroutine_fn zeroinit_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, + int count, BdrvRequestFlags flags) +{ +BDRVZeroinitState *s = bs->opaque; +if (offset >= s->extents) +return 0; +return bdrv_pwrite_zeroes(bs->file, offset, count, flags); +} + +static int coroutine_fn zeroinit_co_pwritev(BlockDriverState *bs, +uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags) +{ +BDRVZeroinitState *s = bs->opaque; +int64_t extents = offset + bytes; +if (extents > s->extents) +s->extents = extents; +return bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags); +} + +static bool zeroinit_recurse_is_first_non_filter(BlockDriverState *bs, + BlockDriverState *candidate) +{ +return bdrv_recurse_is_first_non_filter(bs->file->bs, candidate); +} + +static coroutine_fn int zeroinit_co_flush(BlockDriverState *bs) +{ +return
[pve-devel] [PATCH 20/31] PVE: Add dummy -id command line parameter
From: Wolfgang Bumiller This used to be part of the qemu-side PVE authentication for VNC. Now this does nothing. Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- qemu-options.hx | 3 +++ vl.c | 8 ++++++++ 2 files changed, 11 insertions(+) diff --git a/qemu-options.hx b/qemu-options.hx index 4cb2681bfc..b84e260fa5 100644 --- a/qemu-options.hx +++ b/qemu-options.hx @@ -826,6 +826,9 @@ STEXI @table @option ETEXI +DEF("id", HAS_ARG, QEMU_OPTION_id, +"-id n set the VMID", QEMU_ARCH_ALL) + DEF("fda", HAS_ARG, QEMU_OPTION_fda, "-fda/-fdb file use 'file' as floppy disk 0/1 image\n", QEMU_ARCH_ALL) DEF("fdb", HAS_ARG, QEMU_OPTION_fdb, "", QEMU_ARCH_ALL) diff --git a/vl.c b/vl.c index 1616f55a38..4df15640c5 100644 --- a/vl.c +++ b/vl.c @@ -2828,6 +2828,7 @@ static void user_register_global_props(void) int main(int argc, char **argv, char **envp) { int i; +long vm_id; int snapshot, linux_boot; const char *initrd_filename; const char *kernel_filename, *kernel_cmdline; @@ -3560,6 +3561,13 @@ int main(int argc, char **argv, char **envp) exit(1); } break; +case QEMU_OPTION_id: +vm_id = strtol(optarg, (char **)&optarg, 10); +if (*optarg != 0 || vm_id < 100 || vm_id > INT_MAX) { +error_report("invalid -id argument %s", optarg); +exit(1); +} +break; case QEMU_OPTION_vnc: vnc_parse(optarg, &error_fatal); break; -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 27/31] PVE: [Compat]: 4.0 used balloon qemu-4-0-config-size false here
From: Thomas Lamprecht The underlying issue why this change from upstream arose in the first place is that QEMU 4.0 had already been released by the time we ran into this migration issue, so we did the then obvious fallback to false for the virtio-balloon-device qemu-4-0-config-size property. QEMU made that switch back in 4.1, which now uses a backward compatible mechanism to detect whether the bigger config sizes should be used, i.e., checking the VIRTIO_BALLOON_F_PAGE_POISON or VIRTIO_BALLOON_F_FREE_PAGE_HINT balloon feature flags. Since upstream's released 4.0 had this set to true, they keep it true in their compatibility record for the 4.0 machine, to allow live migrations from 4.0 to 4.1. Since our downstream 4.0 (the first public release of this QEMU) had it set to false, we change it back to false again, for the same reason. Signed-off-by: Thomas Lamprecht --- hw/core/machine.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/hw/core/machine.c b/hw/core/machine.c index 1689ad3bf8..bdcb351ede 100644 --- a/hw/core/machine.c +++ b/hw/core/machine.c @@ -39,7 +39,8 @@ GlobalProperty hw_compat_4_0[] = { { "virtio-vga", "edid", "false" }, { "virtio-gpu", "edid", "false" }, { "virtio-device", "use-started", "false" }, -{ "virtio-balloon-device", "qemu-4-0-config-size", "true" }, +// PVE differed from upstream for 4.0 balloon cfg size +{ "virtio-balloon-device", "qemu-4-0-config-size", "false" }, { "pl031", "migrate-tick-offset", "false" }, }; const size_t hw_compat_4_0_len = G_N_ELEMENTS(hw_compat_4_0); -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 29/31] PVE-Backup - add vma code
--- Makefile | 3 +- Makefile.objs | 1 + vma-reader.c | 857 ++ vma-writer.c | 771 + vma.c | 837 vma.h | 150 + 6 files changed, 2618 insertions(+), 1 deletion(-) create mode 100644 vma-reader.c create mode 100644 vma-writer.c create mode 100644 vma.c create mode 100644 vma.h diff --git a/Makefile b/Makefile index b437a346d7..18d2dba2e4 100644 --- a/Makefile +++ b/Makefile @@ -453,7 +453,7 @@ dummy := $(call unnest-vars,, \ include $(SRC_PATH)/tests/Makefile.include -all: $(DOCS) $(if $(BUILD_DOCS),sphinxdocs) $(TOOLS) $(HELPERS-y) recurse-all modules $(vhost-user-json-y) +all: $(DOCS) $(if $(BUILD_DOCS),sphinxdocs) $(TOOLS) vma$(EXESUF) $(HELPERS-y) recurse-all modules $(vhost-user-json-y) qemu-version.h: FORCE $(call quiet-command, \ @@ -567,6 +567,7 @@ qemu-img.o: qemu-img-cmds.h qemu-img$(EXESUF): qemu-img.o $(authz-obj-y) $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) $(COMMON_LDADDS) qemu-nbd$(EXESUF): qemu-nbd.o $(authz-obj-y) $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) $(COMMON_LDADDS) qemu-io$(EXESUF): qemu-io.o $(authz-obj-y) $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) $(COMMON_LDADDS) +vma$(EXESUF): vma.o vma-reader.o $(authz-obj-y) $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) $(COMMON_LDADDS) qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o $(COMMON_LDADDS) diff --git a/Makefile.objs b/Makefile.objs index f97b40f232..db7fbbe73b 100644 --- a/Makefile.objs +++ b/Makefile.objs @@ -18,6 +18,7 @@ block-obj-y += block.o blockjob.o job.o block-obj-y += block/ scsi/ block-obj-y += qemu-io-cmds.o block-obj-$(CONFIG_REPLICATION) += replication.o +block-obj-y += vma-writer.o block-obj-m = block/ diff --git a/vma-reader.c b/vma-reader.c new file mode 100644 index 00..2b1d1cdab3 --- /dev/null +++ b/vma-reader.c @@ -0,0 +1,857 @@ +/* + * VMA: Virtual Machine Archive + * + * Copyright (C) 2012 Proxmox Server Solutions + * + * Authors: + * Dietmar Maurer (diet...@proxmox.com) + * + * This work is licensed under the terms of the 
GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + * + */ + +#include "qemu/osdep.h" +#include +#include + +#include "qemu-common.h" +#include "qemu/timer.h" +#include "qemu/ratelimit.h" +#include "vma.h" +#include "block/block.h" +#include "sysemu/block-backend.h" + +static unsigned char zero_vma_block[VMA_BLOCK_SIZE]; + +typedef struct VmaRestoreState { +BlockBackend *target; +bool write_zeroes; +unsigned long *bitmap; +int bitmap_size; +} VmaRestoreState; + +struct VmaReader { +int fd; +GChecksum *md5csum; +GHashTable *blob_hash; +unsigned char *head_data; +VmaDeviceInfo devinfo[256]; +VmaRestoreState rstate[256]; +GList *cdata_list; +guint8 vmstate_stream; +uint32_t vmstate_clusters; +/* to show restore percentage if run with -v */ +time_t start_time; +int64_t cluster_count; +int64_t clusters_read; +int64_t zero_cluster_data; +int64_t partial_zero_cluster_data; +int clusters_read_per; +}; + +static guint +g_int32_hash(gconstpointer v) +{ +return *(const uint32_t *)v; +} + +static gboolean +g_int32_equal(gconstpointer v1, gconstpointer v2) +{ +return *((const uint32_t *)v1) == *((const uint32_t *)v2); +} + +static int vma_reader_get_bitmap(VmaRestoreState *rstate, int64_t cluster_num) +{ +assert(rstate); +assert(rstate->bitmap); + +unsigned long val, idx, bit; + +idx = cluster_num / BITS_PER_LONG; + +assert(rstate->bitmap_size > idx); + +bit = cluster_num % BITS_PER_LONG; +val = rstate->bitmap[idx]; + +return !!(val & (1UL << bit)); +} + +static void vma_reader_set_bitmap(VmaRestoreState *rstate, int64_t cluster_num, + int dirty) +{ +assert(rstate); +assert(rstate->bitmap); + +unsigned long val, idx, bit; + +idx = cluster_num / BITS_PER_LONG; + +assert(rstate->bitmap_size > idx); + +bit = cluster_num % BITS_PER_LONG; +val = rstate->bitmap[idx]; +if (dirty) { +if (!(val & (1UL << bit))) { +val |= 1UL << bit; +} +} else { +if (val & (1UL << bit)) { +val &= ~(1UL << bit); +} +} +rstate->bitmap[idx] = val; +} + +typedef struct 
VmaBlob { +uint32_t start; +uint32_t len; +void *data; +} VmaBlob; + +static const VmaBlob *get_header_blob(VmaReader *vmar, uint32_t pos) +{ +assert(vmar); +assert(vmar->blob_hash); + +return g_hash_table_lookup(vmar->blob_hash, &pos); +} + +static const char *get_header_str(VmaReader *vmar, uint32_t pos) +{ +const VmaBlob *blob = g
[pve-devel] [PATCH 23/31] PVE: savevm-async: kick AIO wait on block state write
From: Thomas Lamprecht Signed-off-by: Thomas Lamprecht --- savevm-async.c | 1 + 1 file changed, 1 insertion(+) diff --git a/savevm-async.c b/savevm-async.c index 5a20009b9a..e4bb0d24b2 100644 --- a/savevm-async.c +++ b/savevm-async.c @@ -157,6 +157,7 @@ static void coroutine_fn block_state_write_entry(void *opaque) { BlkRwCo *rwco = opaque; rwco->ret = blk_co_pwritev(snap_state.target, rwco->offset, rwco->qiov->size, rwco->qiov, 0); +aio_wait_kick(); } static ssize_t block_state_writev_buffer(void *opaque, struct iovec *iov, -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 26/31] PVE: Acquire aio_context before calling block_job_add_bdrv
From: Stefan Reiter Otherwise backups immediately fail with 'permission denied' since _add_bdrv tries to release a lock we don't own. Signed-off-by: Stefan Reiter --- blockjob.c | 10 ++ 1 file changed, 10 insertions(+) diff --git a/blockjob.c b/blockjob.c index c6e20e2fcd..4e6074f18c 100644 --- a/blockjob.c +++ b/blockjob.c @@ -436,10 +436,20 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver, notifier_list_add(>job.on_ready, >ready_notifier); notifier_list_add(>job.on_idle, >idle_notifier); +/* block_job_add_bdrv expects us to hold the aio context lock, so acquire it + * before calling if we're not in the main context anyway. */ +if (job->job.aio_context != qemu_get_aio_context()) { +aio_context_acquire(job->job.aio_context); +} + error_setg(>blocker, "block device is in use by block job: %s", job_type_str(>job)); block_job_add_bdrv(job, "main node", bs, 0, BLK_PERM_ALL, _abort); +if (job->job.aio_context != qemu_get_aio_context()) { +aio_context_release(job->job.aio_context); +} + bdrv_op_unblock(bs, BLOCK_OP_TYPE_DATAPLANE, job->blocker); /* Disable request queuing in the BlockBackend to avoid deadlocks on drain: -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 24/31] PVE: move snapshot cleanup into bottom half
From: Wolfgang Bumiller as per: (0ceccd858a8d) migration: qemu_savevm_state_cleanup() in cleanup may affect held locks and therefore change assumptions made by that function! Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- savevm-async.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/savevm-async.c b/savevm-async.c index e4bb0d24b2..10837fc858 100644 --- a/savevm-async.c +++ b/savevm-async.c @@ -200,6 +200,8 @@ static void process_savevm_cleanup(void *opaque) int ret; qemu_bh_delete(snap_state.cleanup_bh); snap_state.cleanup_bh = NULL; +qemu_savevm_state_cleanup(); + qemu_mutex_unlock_iothread(); qemu_thread_join(_state.thread); qemu_mutex_lock_iothread(); @@ -276,7 +278,6 @@ static void *process_savevm_thread(void *opaque) save_snapshot_error("qemu_savevm_state_iterate error %d", ret); break; } -qemu_savevm_state_cleanup(); DPRINTF("save complete\n"); break; } -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 22/31] PVE: [Up+Config] file-posix: make locking optional on create
From: Wolfgang Bumiller Otherwise creating images on nfs/cifs can be problematic. Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- block/file-posix.c | 61 +--- qapi/block-core.json | 3 ++- 2 files changed, 43 insertions(+), 21 deletions(-) diff --git a/block/file-posix.c b/block/file-posix.c index 44b49265ae..0722b0f529 100644 --- a/block/file-posix.c +++ b/block/file-posix.c @@ -2250,6 +2250,7 @@ raw_co_create(BlockdevCreateOptions *options, Error **errp) int fd; uint64_t perm, shared; int result = 0; +bool locked = false; /* Validate options and set default values */ assert(options->driver == BLOCKDEV_DRIVER_FILE); @@ -2283,19 +2284,22 @@ raw_co_create(BlockdevCreateOptions *options, Error **errp) perm = BLK_PERM_WRITE | BLK_PERM_RESIZE; shared = BLK_PERM_ALL & ~BLK_PERM_RESIZE; -/* Step one: Take locks */ -result = raw_apply_lock_bytes(NULL, fd, perm, ~shared, false, errp); -if (result < 0) { -goto out_close; -} +if (file_opts->locking != ON_OFF_AUTO_OFF) { +/* Step one: Take locks */ +result = raw_apply_lock_bytes(NULL, fd, perm, ~shared, false, errp); +if (result < 0) { +goto out_close; +} +locked = true; -/* Step two: Check that nobody else has taken conflicting locks */ -result = raw_check_lock_bytes(fd, perm, shared, errp); -if (result < 0) { -error_append_hint(errp, - "Is another process using the image [%s]?\n", - file_opts->filename); -goto out_unlock; +/* Step two: Check that nobody else has taken conflicting locks */ +result = raw_check_lock_bytes(fd, perm, shared, errp); +if (result < 0) { +error_append_hint(errp, + "Is another process using the image [%s]?\n", + file_opts->filename); +goto out_unlock; +} } /* Clear the file by truncating it to 0 */ @@ -2328,13 +2332,15 @@ raw_co_create(BlockdevCreateOptions *options, Error **errp) } out_unlock: -raw_apply_lock_bytes(NULL, fd, 0, 0, true, _err); -if (local_err) { -/* The above call should not fail, and if it does, that does - * not mean the whole creation operation has failed. 
So - * report it the user for their convenience, but do not report - * it to the caller. */ -warn_report_err(local_err); +if (locked) { +raw_apply_lock_bytes(NULL, fd, 0, 0, true, _err); +if (local_err) { +/* The above call should not fail, and if it does, that does + * not mean the whole creation operation has failed. So + * report it the user for their convenience, but do not report + * it to the caller. */ +warn_report_err(local_err); +} } out_close: @@ -2355,6 +2361,7 @@ static int coroutine_fn raw_co_create_opts(const char *filename, QemuOpts *opts, PreallocMode prealloc; char *buf = NULL; Error *local_err = NULL; +OnOffAuto locking; /* Skip file: protocol prefix */ strstart(filename, "file:", ); @@ -2372,6 +2379,18 @@ static int coroutine_fn raw_co_create_opts(const char *filename, QemuOpts *opts, return -EINVAL; } +locking = qapi_enum_parse(_lookup, + qemu_opt_get(opts, "locking"), + ON_OFF_AUTO_AUTO, _err); +if (local_err) { +error_propagate(errp, local_err); +return -EINVAL; +} + +if (locking == ON_OFF_AUTO_AUTO) { +locking = ON_OFF_AUTO_OFF; +} + options = (BlockdevCreateOptions) { .driver = BLOCKDEV_DRIVER_FILE, .u.file = { @@ -2381,6 +2400,8 @@ static int coroutine_fn raw_co_create_opts(const char *filename, QemuOpts *opts, .preallocation = prealloc, .has_nocow = true, .nocow = nocow, +.has_locking= true, +.locking= locking, }, }; return raw_co_create(, errp); @@ -2901,7 +2922,7 @@ static int raw_check_perm(BlockDriverState *bs, uint64_t perm, uint64_t shared, } /* Copy locks to the new fd */ -if (s->perm_change_fd) { +if (s->use_lock && s->perm_change_fd) { ret = raw_apply_lock_bytes(NULL, s->perm_change_fd, perm, ~shared, false, errp); if (ret < 0) { diff --git a/qapi/block-core.json b/qapi/block-core.json index 0cf68fea14..783a868eb2 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -4259,7 +4259,8 @@ 'data': { 'filename': 'str', 'size': 'size', '*preallocation': 'PreallocMode', -'*nocow': 'bool' } } +'*nocow': 'bool', +
[pve-devel] [PATCH 25/31] PVE: monitor: disable oob capability
From: Wolfgang Bumiller A bisect revealed that commit 8258292e18c3 ("monitor: Remove "x-oob", offer capability "oob" unconditionally") causes unexpected hangs when restoring live snapshots from some types of block devices (particularly RBD). We need to figure out what's happening there. For now, since we had this disabled before and probably don't need it now either, disable oob, so we can get a functioning qemu out... Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- monitor/qmp.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/monitor/qmp.c b/monitor/qmp.c index b67a8e7d1f..8f44fed944 100644 --- a/monitor/qmp.c +++ b/monitor/qmp.c @@ -395,8 +395,7 @@ void monitor_init_qmp(Chardev *chr, bool pretty) MonitorQMP *mon = g_new0(MonitorQMP, 1); /* Note: we run QMP monitor in I/O thread when @chr supports that */ -monitor_data_init(&mon->common, true, false, - qemu_chr_has_feature(chr, QEMU_CHAR_FEATURE_GCONTEXT)); +monitor_data_init(&mon->common, true, false, false); mon->pretty = pretty; -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 14/31] PVE: virtio-balloon: improve query-balloon
From: Wolfgang Bumiller Actually provide memory information via the query-balloon command. Signed-off-by: Thomas Lamprecht --- hw/virtio/virtio-balloon.c | 33 +++-- monitor/hmp-cmds.c | 30 +- qapi/misc.json | 22 +- 3 files changed, 81 insertions(+), 4 deletions(-) diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c index 40b04f5180..76e907e628 100644 --- a/hw/virtio/virtio-balloon.c +++ b/hw/virtio/virtio-balloon.c @@ -713,8 +713,37 @@ static uint64_t virtio_balloon_get_features(VirtIODevice *vdev, uint64_t f, static void virtio_balloon_stat(void *opaque, BalloonInfo *info) { VirtIOBalloon *dev = opaque; -info->actual = get_current_ram_size() - ((uint64_t) dev->actual << - VIRTIO_BALLOON_PFN_SHIFT); +ram_addr_t ram_size = get_current_ram_size(); +info->actual = ram_size - ((uint64_t) dev->actual << + VIRTIO_BALLOON_PFN_SHIFT); + +info->max_mem = ram_size; + +if (!(balloon_stats_enabled(dev) && balloon_stats_supported(dev) && + dev->stats_last_update)) { + return; +} + +info->last_update = dev->stats_last_update; +info->has_last_update = true; + +info->mem_swapped_in = dev->stats[VIRTIO_BALLOON_S_SWAP_IN]; +info->has_mem_swapped_in = info->mem_swapped_in >= 0 ? true : false; + +info->mem_swapped_out = dev->stats[VIRTIO_BALLOON_S_SWAP_OUT]; +info->has_mem_swapped_out = info->mem_swapped_out >= 0 ? true : false; + +info->major_page_faults = dev->stats[VIRTIO_BALLOON_S_MAJFLT]; +info->has_major_page_faults = info->major_page_faults >= 0 ? true : false; + +info->minor_page_faults = dev->stats[VIRTIO_BALLOON_S_MINFLT]; +info->has_minor_page_faults = info->minor_page_faults >= 0 ? true : false; + +info->free_mem = dev->stats[VIRTIO_BALLOON_S_MEMFREE]; +info->has_free_mem = info->free_mem >= 0 ? true : false; + +info->total_mem = dev->stats[VIRTIO_BALLOON_S_MEMTOT]; +info->has_total_mem = info->total_mem >= 0 ? 
true : false; } static void virtio_balloon_to_target(void *opaque, ram_addr_t target) diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c index b2551c16d1..2e725ed818 100644 --- a/monitor/hmp-cmds.c +++ b/monitor/hmp-cmds.c @@ -854,7 +854,35 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict) return; } -monitor_printf(mon, "balloon: actual=%" PRId64 "\n", info->actual >> 20); +monitor_printf(mon, "balloon: actual=%" PRId64, info->actual >> 20); +monitor_printf(mon, " max_mem=%" PRId64, info->max_mem >> 20); +if (info->has_total_mem) { +monitor_printf(mon, " total_mem=%" PRId64, info->total_mem >> 20); +} +if (info->has_free_mem) { +monitor_printf(mon, " free_mem=%" PRId64, info->free_mem >> 20); +} + +if (info->has_mem_swapped_in) { +monitor_printf(mon, " mem_swapped_in=%" PRId64, info->mem_swapped_in); +} +if (info->has_mem_swapped_out) { +monitor_printf(mon, " mem_swapped_out=%" PRId64, info->mem_swapped_out); +} +if (info->has_major_page_faults) { +monitor_printf(mon, " major_page_faults=%" PRId64, + info->major_page_faults); +} +if (info->has_minor_page_faults) { +monitor_printf(mon, " minor_page_faults=%" PRId64, + info->minor_page_faults); +} +if (info->has_last_update) { +monitor_printf(mon, " last_update=%" PRId64, + info->last_update); +} + +monitor_printf(mon, "\n"); qapi_free_BalloonInfo(info); } diff --git a/qapi/misc.json b/qapi/misc.json index 33b94e3589..ed65ed27e3 100644 --- a/qapi/misc.json +++ b/qapi/misc.json @@ -408,10 +408,30 @@ # # @actual: the number of bytes the balloon currently contains # +# @last_update: time when stats got updated from guest +# +# @mem_swapped_in: number of pages swapped in within the guest +# +# @mem_swapped_out: number of pages swapped out within the guest +# +# @major_page_faults: number of major page faults within the guest +# +# @minor_page_faults: number of minor page faults within the guest +# +# @free_mem: amount of memory (in bytes) free in the guest +# +# @total_mem: amount of memory (in bytes) visible 
to the guest +# +# @max_mem: amount of memory (in bytes) assigned to the guest +# # Since: 0.14.0 # ## -{ 'struct': 'BalloonInfo', 'data': {'actual': 'int' } } +{ 'struct': 'BalloonInfo', + 'data': {'actual': 'int', '*last_update': 'int', '*mem_swapped_in': 'int', + '*mem_swapped_out': 'int', '*major_page_faults': 'int', + '*minor_page_faults': 'int', '*free_mem': 'int', + '*total_mem': 'int', 'max_mem': 'int' } } ## # @query-balloon: -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com
[pve-devel] [PATCH 21/31] PVE: [Config] Revert "target-i386: disable LINT0 after reset"
From: Wolfgang Bumiller This reverts commit b8eb5512fd8a115f164edbbe897cdf8884920ccb. Signed-off-by: Thomas Lamprecht --- hw/intc/apic_common.c | 9 + 1 file changed, 9 insertions(+) diff --git a/hw/intc/apic_common.c b/hw/intc/apic_common.c index 375cb6abe9..e7d479c7e9 100644 --- a/hw/intc/apic_common.c +++ b/hw/intc/apic_common.c @@ -259,6 +259,15 @@ static void apic_reset_common(DeviceState *dev) info->vapic_base_update(s); apic_init_reset(dev); + +if (bsp) { +/* + * LINT0 delivery mode on CPU #0 is set to ExtInt at initialization + * time typically by BIOS, so PIC interrupt can be delivered to the + * processor when local APIC is enabled. + */ +s->lvt[APIC_LVT_LINT0] = 0x700; +} } static const VMStateDescription vmstate_apic_common; -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 11/31] PVE: [Up] qemu-img dd: add osize and read from/to stdin/stdout
From: Wolfgang Bumiller Neither convert nor dd were previously able to write to or read from a pipe. Particularly serializing an image file into a raw stream or vice versa can be useful, but using `qemu-img convert -f qcow2 -O raw foo.qcow2 /dev/stdout` in a pipe will fail trying to seek. While dd and convert have overlapping use cases, `dd` is a simple read/write loop while convert is much more sophisticated and has ways of dealing with holes and blocks of zeroes. Since these typically can't be detected in pipes via SEEK_DATA/HOLE or skipped while writing, dd seems to be the better choice for implementing stdin/stdout streams. This patch causes "if" and "of" to default to stdin and stdout respectively, allowing only the "raw" format to be used in these cases. Since the input can now be a pipe, we have no way of detecting the size of the output image to create. Since we also want to support images with a size not matching the dd command's "bs" parameter (which, together with "count", could be used to calculate the desired size, and is already used to limit it), the "osize" option is added to explicitly override the output file's size.
Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- qemu-img-cmds.hx | 4 +- qemu-img.c | 192 +-- 2 files changed, 122 insertions(+), 74 deletions(-) diff --git a/qemu-img-cmds.hx b/qemu-img-cmds.hx index 1c93e6d185..8094abb3ee 100644 --- a/qemu-img-cmds.hx +++ b/qemu-img-cmds.hx @@ -56,9 +56,9 @@ STEXI ETEXI DEF("dd", img_dd, -"dd [--image-opts] [-U] [-f fmt] [-O output_fmt] [bs=block_size] [count=blocks] [skip=blocks] if=input of=output") +"dd [--image-opts] [-U] [-f fmt] [-O output_fmt] [bs=block_size] [count=blocks] [skip=blocks] [osize=output_size] if=input of=output") STEXI -@item dd [--image-opts] [-U] [-f @var{fmt}] [-O @var{output_fmt}] [bs=@var{block_size}] [count=@var{blocks}] [skip=@var{blocks}] if=@var{input} of=@var{output} +@item dd [--image-opts] [-U] [-f @var{fmt}] [-O @var{output_fmt}] [bs=@var{block_size}] [count=@var{blocks}] [skip=@var{blocks}] [osize=output_size] if=@var{input} of=@var{output} ETEXI DEF("info", img_info, diff --git a/qemu-img.c b/qemu-img.c index 12211bed76..d2516968c6 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -4405,10 +4405,12 @@ out: #define C_IF 04 #define C_OF 010 #define C_SKIP020 +#define C_OSIZE 040 struct DdInfo { unsigned int flags; int64_t count; +int64_t osize; }; struct DdIo { @@ -4487,6 +4489,20 @@ static int img_dd_skip(const char *arg, return 0; } +static int img_dd_osize(const char *arg, +struct DdIo *in, struct DdIo *out, +struct DdInfo *dd) +{ +dd->osize = cvtnum(arg); + +if (dd->osize < 0) { +error_report("invalid number: '%s'", arg); +return 1; +} + +return 0; +} + static int img_dd(int argc, char **argv) { int ret = 0; @@ -4527,6 +4543,7 @@ static int img_dd(int argc, char **argv) { "if", img_dd_if, C_IF }, { "of", img_dd_of, C_OF }, { "skip", img_dd_skip, C_SKIP }, +{ "osize", img_dd_osize, C_OSIZE }, { NULL, NULL, 0 } }; const struct option long_options[] = { @@ -4605,8 +4622,13 @@ static int img_dd(int argc, char **argv) arg = NULL; } -if (!(dd.flags & C_IF && dd.flags & C_OF)) { 
-error_report("Must specify both input and output files"); +if (!(dd.flags & C_IF) && (!fmt || strcmp(fmt, "raw") != 0)) { +error_report("Input format must be raw when readin from stdin"); +ret = -1; +goto out; +} +if (!(dd.flags & C_OF) && strcmp(out_fmt, "raw") != 0) { +error_report("Output format must be raw when writing to stdout"); ret = -1; goto out; } @@ -4618,85 +4640,101 @@ static int img_dd(int argc, char **argv) goto out; } -blk1 = img_open(image_opts, in.filename, fmt, 0, false, false, -force_share); +if (dd.flags & C_IF) { +blk1 = img_open(image_opts, in.filename, fmt, 0, false, false, +force_share); -if (!blk1) { -ret = -1; -goto out; +if (!blk1) { +ret = -1; +goto out; +} } -drv = bdrv_find_format(out_fmt); -if (!drv) { -error_report("Unknown file format"); +if (dd.flags & C_OSIZE) { +size = dd.osize; +} else if (dd.flags & C_IF) { +size = blk_getlength(blk1); +if (size < 0) { +error_report("Failed to get size for '%s'", in.filename); +ret = -1; +goto out; +} +} else if (dd.flags & C_COUNT) { +size = dd.count * in.bsz; +} else { +error_report("Output size must be known when reading from stdin"); ret = -1; goto out; } -proto_drv = bdrv_find_protocol(out.filename,
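The output-size precedence the hunk above implements — an explicit `osize` always wins, otherwise the input image's length is used, otherwise `count * bs`, and reading from stdin with no size hint is an error — can be sketched in Python. The flag constants and function name here are illustrative (only `C_IF` and `C_OSIZE` values are visible in the patch; `C_COUNT` is assumed from the surrounding flag scheme), not qemu-img API:

```python
# Subset of the img_dd flag bits (C_COUNT assumed from the octal flag scheme)
C_COUNT, C_IF, C_OSIZE = 0o2, 0o4, 0o40

def output_size(flags, osize=0, input_len=None, count=0, bs=512):
    """Mirror the precedence qemu-img dd uses to pick the output image size."""
    if flags & C_OSIZE:
        return osize                 # explicit osize= override always wins
    if flags & C_IF:
        if input_len is None or input_len < 0:
            raise RuntimeError("Failed to get size of input")
        return input_len             # size of the input image (blk_getlength)
    if flags & C_COUNT:
        return count * bs            # derived from count= and bs=
    raise RuntimeError("Output size must be known when reading from stdin")

# osize wins even when an input file is present
assert output_size(C_OSIZE | C_IF, osize=1024, input_len=4096) == 1024
# stdin input with count= falls back to count * bs
assert output_size(C_COUNT, count=4, bs=512) == 2048
```

This matches the if/else-if chain added to `img_dd()`: the error branch is only reached when neither an input file, an `osize`, nor a `count` constrains the output size.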
[pve-devel] [PATCH 17/31] PVE: internal snapshot async
Signed-off-by: Thomas Lamprecht Signed-off-by: Dietmar Maurer --- Makefile.objs| 1 + hmp-commands-info.hx | 13 + hmp-commands.hx | 32 +++ include/migration/snapshot.h | 1 + include/monitor/hmp.h| 5 + monitor/hmp-cmds.c | 57 + qapi/migration.json | 34 +++ qapi/misc.json | 32 +++ qemu-options.hx | 13 + savevm-async.c | 463 +++ vl.c | 10 + 11 files changed, 661 insertions(+) create mode 100644 savevm-async.c diff --git a/Makefile.objs b/Makefile.objs index 11ba1a36bd..f97b40f232 100644 --- a/Makefile.objs +++ b/Makefile.objs @@ -48,6 +48,7 @@ common-obj-y += bootdevice.o iothread.o common-obj-y += dump/ common-obj-y += job-qmp.o common-obj-y += monitor/ +common-obj-y += savevm-async.o common-obj-y += net/ common-obj-y += qdev-monitor.o device-hotplug.o common-obj-$(CONFIG_WIN32) += os-win32.o diff --git a/hmp-commands-info.hx b/hmp-commands-info.hx index 257ee7d7a3..139e673bea 100644 --- a/hmp-commands-info.hx +++ b/hmp-commands-info.hx @@ -608,6 +608,19 @@ STEXI @item info migrate_cache_size @findex info migrate_cache_size Show current migration xbzrle cache size. +ETEXI + +{ +.name = "savevm", +.args_type = "", +.params = "", +.help = "show savevm status", +.cmd = hmp_info_savevm, +}, + +STEXI +@item info savevm +show savevm status ETEXI { diff --git a/hmp-commands.hx b/hmp-commands.hx index cfcc044ce4..104288322d 100644 --- a/hmp-commands.hx +++ b/hmp-commands.hx @@ -1945,3 +1945,35 @@ ETEXI STEXI @end table ETEXI + +{ +.name = "savevm-start", +.args_type = "statefile:s?", +.params = "[statefile]", +.help = "Prepare for snapshot and halt VM. 
Save VM state to statefile.", +.cmd = hmp_savevm_start, +}, + +{ +.name = "snapshot-drive", +.args_type = "device:s,name:s", +.params = "device name", +.help = "Create internal snapshot.", +.cmd = hmp_snapshot_drive, +}, + +{ +.name = "delete-drive-snapshot", +.args_type = "device:s,name:s", +.params = "device name", +.help = "Delete internal snapshot.", +.cmd = hmp_delete_drive_snapshot, +}, + +{ +.name = "savevm-end", +.args_type = "", +.params = "", +.help = "Resume VM after snaphot.", +.cmd = hmp_savevm_end, +}, diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h index c85b6ec75b..4411b7121d 100644 --- a/include/migration/snapshot.h +++ b/include/migration/snapshot.h @@ -17,5 +17,6 @@ int save_snapshot(const char *name, Error **errp); int load_snapshot(const char *name, Error **errp); +int load_snapshot_from_blockdev(const char *filename, Error **errp); #endif diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h index a0e9511440..c6ee8295f0 100644 --- a/include/monitor/hmp.h +++ b/include/monitor/hmp.h @@ -25,6 +25,7 @@ void hmp_info_status(Monitor *mon, const QDict *qdict); void hmp_info_uuid(Monitor *mon, const QDict *qdict); void hmp_info_chardev(Monitor *mon, const QDict *qdict); void hmp_info_mice(Monitor *mon, const QDict *qdict); +void hmp_info_savevm(Monitor *mon, const QDict *qdict); void hmp_info_migrate(Monitor *mon, const QDict *qdict); void hmp_info_migrate_capabilities(Monitor *mon, const QDict *qdict); void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict); @@ -102,6 +103,10 @@ void hmp_netdev_add(Monitor *mon, const QDict *qdict); void hmp_netdev_del(Monitor *mon, const QDict *qdict); void hmp_getfd(Monitor *mon, const QDict *qdict); void hmp_closefd(Monitor *mon, const QDict *qdict); +void hmp_savevm_start(Monitor *mon, const QDict *qdict); +void hmp_snapshot_drive(Monitor *mon, const QDict *qdict); +void hmp_delete_drive_snapshot(Monitor *mon, const QDict *qdict); +void hmp_savevm_end(Monitor *mon, const 
QDict *qdict); void hmp_sendkey(Monitor *mon, const QDict *qdict); void hmp_screendump(Monitor *mon, const QDict *qdict); void hmp_nbd_server_start(Monitor *mon, const QDict *qdict); diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c index 2e725ed818..90aa34be25 100644 --- a/monitor/hmp-cmds.c +++ b/monitor/hmp-cmds.c @@ -2607,6 +2607,63 @@ void hmp_info_memory_devices(Monitor *mon, const QDict *qdict) hmp_handle_error(mon, ); } +void hmp_savevm_start(Monitor *mon, const QDict *qdict) +{ +Error *errp = NULL; +const char *statefile = qdict_get_try_str(qdict, "statefile"); + +qmp_savevm_start(statefile != NULL, statefile, ); +
[pve-devel] [PATCH 28/31] PVE: Allow version code in machine type
E.g. pc-i440fx-4.0+pve3 would print 'pve3' as version code while selecting pc-i440fx-4.0 as machine type. Version is made available as 'pve-version' in query-machines (same as, and only if 'is-current'). Signed-off-by: Stefan Reiter --- hw/core/machine-qmp-cmds.c | 6 ++ include/hw/boards.h| 2 ++ qapi/machine.json | 3 ++- vl.c | 15 ++- 4 files changed, 24 insertions(+), 2 deletions(-) diff --git a/hw/core/machine-qmp-cmds.c b/hw/core/machine-qmp-cmds.c index 1953633e82..ca8c0dc53d 100644 --- a/hw/core/machine-qmp-cmds.c +++ b/hw/core/machine-qmp-cmds.c @@ -234,6 +234,12 @@ MachineInfoList *qmp_query_machines(Error **errp) if (strcmp(mc->name, MACHINE_GET_CLASS(current_machine)->name) == 0) { info->has_is_current = true; info->is_current = true; + +// PVE version string only exists for current machine +if (mc->pve_version) { +info->has_pve_version = true; +info->pve_version = g_strdup(mc->pve_version); +} } if (mc->default_cpu_type) { diff --git a/include/hw/boards.h b/include/hw/boards.h index de45087f34..e24d2134c0 100644 --- a/include/hw/boards.h +++ b/include/hw/boards.h @@ -185,6 +185,8 @@ struct MachineClass { const char *desc; const char *deprecation_reason; +const char *pve_version; + void (*init)(MachineState *state); void (*reset)(MachineState *state); void (*wakeup)(MachineState *state); diff --git a/qapi/machine.json b/qapi/machine.json index cbdb6f6d66..a2bd4dd304 100644 --- a/qapi/machine.json +++ b/qapi/machine.json @@ -359,7 +359,8 @@ 'data': { 'name': 'str', '*alias': 'str', '*is-default': 'bool', '*is-current': 'bool', 'cpu-max': 'int', 'hotpluggable-cpus': 'bool', 'numa-mem-supported': 'bool', -'deprecated': 'bool', '*default-cpu-type': 'str' } } +'deprecated': 'bool', '*default-cpu-type': 'str', +'*pve-version': 'str' } } ## # @query-machines: diff --git a/vl.c b/vl.c index 4df15640c5..e7f3ce7607 100644 --- a/vl.c +++ b/vl.c @@ -2475,6 +2475,8 @@ static MachineClass *machine_parse(const char *name, GSList *machines) { MachineClass *mc; GSList *el; 
+size_t pvever_index = 0; +gchar *name_clean; if (is_help_option(name)) { printf("Supported machines are:\n"); @@ -2491,12 +2493,23 @@ static MachineClass *machine_parse(const char *name, GSList *machines) exit(0); } -mc = find_machine(name, machines); +// PVE version is specified with '+' as seperator, e.g. pc-i440fx+pvever +pvever_index = strcspn(name, "+"); + +name_clean = g_strndup(name, pvever_index); +mc = find_machine(name_clean, machines); +g_free(name_clean); + if (!mc) { error_report("unsupported machine type"); error_printf("Use -machine help to list supported machines\n"); exit(1); } + +if (pvever_index < strlen(name)) { +mc->pve_version = [pvever_index+1]; +} + return mc; } -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
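The `strcspn`-based split in `machine_parse()` above can be modeled in Python to show how machine names with and without a `+pvever` suffix are handled (a sketch of the parsing logic only, not QEMU code):

```python
def parse_machine(name):
    """Split 'type+pvever' the way machine_parse() does with strcspn(name, "+")."""
    # strcspn returns the index of the first '+', or len(name) if there is none
    plus = name.find("+")
    pvever_index = plus if plus != -1 else len(name)
    machine = name[:pvever_index]   # name_clean passed to find_machine()
    # only set a version when something follows the '+'
    pve_version = name[pvever_index + 1:] if pvever_index < len(name) else None
    return machine, pve_version

assert parse_machine("pc-i440fx-4.0+pve3") == ("pc-i440fx-4.0", "pve3")
assert parse_machine("pc-i440fx-4.0") == ("pc-i440fx-4.0", None)
```

So `pc-i440fx-4.0+pve3` selects the `pc-i440fx-4.0` machine class and exposes `pve3` via the new `pve-version` field in `query-machines`.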
[pve-devel] [PATCH 15/31] PVE: qapi: modify query machines
provide '*is-current' in MachineInfo struct Signed-off-by: Thomas Lamprecht Signed-off-by: Dietmar Maurer --- hw/core/machine-qmp-cmds.c | 6 ++ qapi/machine.json | 4 +++- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/hw/core/machine-qmp-cmds.c b/hw/core/machine-qmp-cmds.c index eed5aeb2f7..1953633e82 100644 --- a/hw/core/machine-qmp-cmds.c +++ b/hw/core/machine-qmp-cmds.c @@ -230,6 +230,12 @@ MachineInfoList *qmp_query_machines(Error **errp) info->hotpluggable_cpus = mc->has_hotpluggable_cpus; info->numa_mem_supported = mc->numa_mem_supported; info->deprecated = !!mc->deprecation_reason; + +if (strcmp(mc->name, MACHINE_GET_CLASS(current_machine)->name) == 0) { +info->has_is_current = true; +info->is_current = true; +} + if (mc->default_cpu_type) { info->default_cpu_type = g_strdup(mc->default_cpu_type); info->has_default_cpu_type = true; diff --git a/qapi/machine.json b/qapi/machine.json index ca26779f1a..cbdb6f6d66 100644 --- a/qapi/machine.json +++ b/qapi/machine.json @@ -336,6 +336,8 @@ # # @is-default: whether the machine is default # +# @is-current: whether this machine is currently used +# # @cpu-max: maximum number of CPUs supported by the machine type # (since 1.5.0) # @@ -355,7 +357,7 @@ ## { 'struct': 'MachineInfo', 'data': { 'name': 'str', '*alias': 'str', -'*is-default': 'bool', 'cpu-max': 'int', +'*is-default': 'bool', '*is-current': 'bool', 'cpu-max': 'int', 'hotpluggable-cpus': 'bool', 'numa-mem-supported': 'bool', 'deprecated': 'bool', '*default-cpu-type': 'str' } } -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 07/31] PVE: [Config] rbd: block: rbd: disable rbd_cache_writethrough_until_flush
From: Wolfgang Bumiller Either the cache mode asks for a cache or it doesn't. There's no point in having a "temporary" cache mode. This option AFAIK was introduced as a hack for ancient virtio drivers. If anything, we should have a separate option for it. Better yet, VMs affected by the related issue should simply explicitly choose writethrough. Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- block/rbd.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/rbd.c b/block/rbd.c index 027cbcc695..3ac7ff7bd5 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -637,6 +637,8 @@ static int qemu_rbd_connect(rados_t *cluster, rados_ioctx_t *io_ctx, rados_conf_set(*cluster, "rbd_cache", "false"); } +rados_conf_set(*cluster, "rbd_cache_writethrough_until_flush", "false"); + r = rados_connect(*cluster); if (r < 0) { error_setg_errno(errp, -r, "error connecting"); -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 05/31] PVE: [Config] smm_available = false
From: Alexandre Derumier Signed-off-by: Alexandre Derumier Signed-off-by: Thomas Lamprecht --- hw/i386/pc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/hw/i386/pc.c b/hw/i386/pc.c index ac08e63604..4bd9ab52a0 100644 --- a/hw/i386/pc.c +++ b/hw/i386/pc.c @@ -2040,7 +2040,7 @@ bool pc_machine_is_smm_enabled(PCMachineState *pcms) if (tcg_enabled() || qtest_enabled()) { smm_available = true; } else if (kvm_enabled()) { -smm_available = kvm_has_smm(); +smm_available = false; } if (smm_available) { -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 06/31] PVE: [Config] glusterfs: no default logfile if daemonized
From: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- block/gluster.c | 15 +++ 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/block/gluster.c b/block/gluster.c index 4fa4a77a47..bfb57ba098 100644 --- a/block/gluster.c +++ b/block/gluster.c @@ -42,7 +42,7 @@ #define GLUSTER_DEBUG_DEFAULT 4 #define GLUSTER_DEBUG_MAX 9 #define GLUSTER_OPT_LOGFILE "logfile" -#define GLUSTER_LOGFILE_DEFAULT "-" /* handled in libgfapi as /dev/stderr */ +#define GLUSTER_LOGFILE_DEFAULT NULL /* * Several versions of GlusterFS (3.12? -> 6.0.1) fail when the transfer size * is greater or equal to 1024 MiB, so we are limiting the transfer size to 512 @@ -424,6 +424,7 @@ static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf, int old_errno; SocketAddressList *server; unsigned long long port; +const char *logfile; glfs = glfs_find_preopened(gconf->volume); if (glfs) { @@ -466,9 +467,15 @@ static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf, } } -ret = glfs_set_logging(glfs, gconf->logfile, gconf->debug); -if (ret < 0) { -goto out; +logfile = gconf->logfile; +if (!logfile && !is_daemonized()) { +logfile = "-"; +} +if (logfile) { +ret = glfs_set_logging(glfs, logfile, gconf->debug); +if (ret < 0) { +goto out; +} } ret = glfs_init(glfs); -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 10/31] PVE: [Up] qemu-img: return success on info without snapshots
From: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- qemu-img.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/qemu-img.c b/qemu-img.c index 95a24b9762..12211bed76 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -2791,7 +2791,8 @@ static int img_info(int argc, char **argv) list = collect_image_info_list(image_opts, filename, fmt, chain, force_share); if (!list) { -return 1; + // return success if snapshot does not exist +return 0; } switch (output_format) { -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 12/31] PVE: [Up] qemu-img dd: add isize parameter
From: Wolfgang Bumiller For writing small images from stdin to bigger ones, and to distinguish between an actually unexpected and an expected end of input. Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- qemu-img.c | 29 ++--- 1 file changed, 26 insertions(+), 3 deletions(-) diff --git a/qemu-img.c b/qemu-img.c index d2516968c6..8da1ea3951 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -4406,11 +4406,13 @@ out: #define C_OF 010 #define C_SKIP020 #define C_OSIZE 040 +#define C_ISIZE 0100 struct DdInfo { unsigned int flags; int64_t count; int64_t osize; +int64_t isize; }; struct DdIo { @@ -4503,6 +4505,20 @@ static int img_dd_osize(const char *arg, return 0; } +static int img_dd_isize(const char *arg, +struct DdIo *in, struct DdIo *out, +struct DdInfo *dd) +{ +dd->isize = cvtnum(arg); + +if (dd->isize < 0) { +error_report("invalid number: '%s'", arg); +return 1; +} + +return 0; +} + static int img_dd(int argc, char **argv) { int ret = 0; @@ -4517,12 +4533,14 @@ static int img_dd(int argc, char **argv) int c, i; const char *out_fmt = "raw"; const char *fmt = NULL; -int64_t size = 0; +int64_t size = 0, readsize = 0; int64_t block_count = 0, out_pos, in_pos; bool force_share = false; struct DdInfo dd = { .flags = 0, .count = 0, +.osize = 0, +.isize = -1, }; struct DdIo in = { .bsz = 512, /* Block size is by default 512 bytes */ @@ -4544,6 +4562,7 @@ static int img_dd(int argc, char **argv) { "of", img_dd_of, C_OF }, { "skip", img_dd_skip, C_SKIP }, { "osize", img_dd_osize, C_OSIZE }, +{ "isize", img_dd_isize, C_ISIZE }, { NULL, NULL, 0 } }; const struct option long_options[] = { @@ -4750,14 +4769,18 @@ static int img_dd(int argc, char **argv) in.buf = g_new(uint8_t, in.bsz); -for (out_pos = 0; in_pos < size; block_count++) { +readsize = (dd.isize > 0) ? dd.isize : size; +for (out_pos = 0; in_pos < readsize; block_count++) { int in_ret, out_ret; -size_t in_bsz = in_pos + in.bsz > size ? 
size - in_pos : in.bsz; +size_t in_bsz = in_pos + in.bsz > readsize ? readsize - in_pos : in.bsz; if (blk1) { in_ret = blk_pread(blk1, in_pos, in.buf, in_bsz); } else { in_ret = read(STDIN_FILENO, in.buf, in_bsz); if (in_ret == 0) { +if (dd.isize == 0) { +goto out; +} /* early EOF is considered an error */ error_report("Input ended unexpectedly"); ret = -1; -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
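The read-loop semantics of the `isize` patch can be sketched in Python: an `isize > 0` caps how much input is expected, `isize == 0` makes an early EOF acceptable, and the default (`isize` unset, modeled as `-1`) keeps the old behavior of treating early EOF as an error. This is a toy model of the loop, not qemu-img code:

```python
import io

def dd_copy(read_chunk, total_size, isize=-1, bs=512):
    """Model the patched img_dd read loop: read bs-sized chunks until
    readsize bytes arrive, handling EOF according to isize."""
    readsize = isize if isize > 0 else total_size
    in_pos = 0
    while in_pos < readsize:
        want = min(bs, readsize - in_pos)
        data = read_chunk(want)
        if len(data) == 0:
            if isize == 0:
                return in_pos        # expected end of input
            raise RuntimeError("Input ended unexpectedly")  # early EOF
        in_pos += len(data)
    return in_pos

# a 1000-byte stream copied into a larger image, with isize saying when to stop
src = io.BytesIO(b"x" * 1000)
assert dd_copy(src.read, total_size=2048, isize=1000) == 1000
```

Without `isize`, the same 1000-byte stream against a 2048-byte output would hit EOF at byte 1000 and fail with "Input ended unexpectedly", which is exactly the case the parameter was added for.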
[pve-devel] [PATCH 04/31] PVE: [Config] ui/spice: default to pve certificates
From: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- ui/spice-core.c | 15 +-- 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/ui/spice-core.c b/ui/spice-core.c index ecc2ec2c55..ca04965ead 100644 --- a/ui/spice-core.c +++ b/ui/spice-core.c @@ -668,32 +668,35 @@ void qemu_spice_init(void) if (tls_port) { x509_dir = qemu_opt_get(opts, "x509-dir"); -if (!x509_dir) { -x509_dir = "."; -} str = qemu_opt_get(opts, "x509-key-file"); if (str) { x509_key_file = g_strdup(str); -} else { +} else if (x509_dir) { x509_key_file = g_strdup_printf("%s/%s", x509_dir, X509_SERVER_KEY_FILE); +} else { +x509_key_file = g_strdup("/etc/pve/local/pve-ssl.key"); } str = qemu_opt_get(opts, "x509-cert-file"); if (str) { x509_cert_file = g_strdup(str); -} else { +} else if (x509_dir) { x509_cert_file = g_strdup_printf("%s/%s", x509_dir, X509_SERVER_CERT_FILE); +} else { +x509_cert_file = g_strdup("/etc/pve/local/pve-ssl.pem"); } str = qemu_opt_get(opts, "x509-cacert-file"); if (str) { x509_cacert_file = g_strdup(str); -} else { +} else if (x509_dir) { x509_cacert_file = g_strdup_printf("%s/%s", x509_dir, X509_CA_CERT_FILE); +} else { +x509_cacert_file = g_strdup("/etc/pve/pve-root-ca.pem"); } x509_key_password = qemu_opt_get(opts, "x509-key-password"); -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 09/31] PVE: [Up] glusterfs: allow partial reads
From: Wolfgang Bumiller This should deal with qemu bug #1644754 until upstream decides which way to go. The general direction seems to be away from sector based block APIs and with that in mind, and when comparing to other network block backends (eg. nfs) treating partial reads as errors doesn't seem to make much sense. Signed-off-by: Thomas Lamprecht --- block/gluster.c | 10 +- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/block/gluster.c b/block/gluster.c index bfb57ba098..81fff09c6c 100644 --- a/block/gluster.c +++ b/block/gluster.c @@ -57,6 +57,7 @@ typedef struct GlusterAIOCB { int ret; Coroutine *coroutine; AioContext *aio_context; +bool is_write; } GlusterAIOCB; typedef struct BDRVGlusterState { @@ -763,8 +764,10 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, acb->ret = 0; /* Success */ } else if (ret < 0) { acb->ret = -errno; /* Read/Write failed */ +} else if (acb->is_write) { +acb->ret = -EIO; /* Partial write - fail it */ } else { -acb->ret = -EIO; /* Partial read/write - fail it */ +acb->ret = 0; /* Success */ } aio_co_schedule(acb->aio_context, acb->coroutine); @@ -1035,6 +1038,7 @@ static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs, acb.ret = 0; acb.coroutine = qemu_coroutine_self(); acb.aio_context = bdrv_get_aio_context(bs); +acb.is_write = true; ret = glfs_zerofill_async(s->fd, offset, size, gluster_finish_aiocb, ); if (ret < 0) { @@ -1215,9 +1219,11 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs, acb.aio_context = bdrv_get_aio_context(bs); if (write) { +acb.is_write = true; ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0, gluster_finish_aiocb, ); } else { +acb.is_write = false; ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0, gluster_finish_aiocb, ); } @@ -1280,6 +1286,7 @@ static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs) acb.ret = 0; acb.coroutine = qemu_coroutine_self(); acb.aio_context = 
bdrv_get_aio_context(bs); +acb.is_write = true; ret = glfs_fsync_async(s->fd, gluster_finish_aiocb, ); if (ret < 0) { @@ -1326,6 +1333,7 @@ static coroutine_fn int qemu_gluster_co_pdiscard(BlockDriverState *bs, acb.ret = 0; acb.coroutine = qemu_coroutine_self(); acb.aio_context = bdrv_get_aio_context(bs); +acb.is_write = true; ret = glfs_discard_async(s->fd, offset, size, gluster_finish_aiocb, ); if (ret < 0) { -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
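The behavior change in `gluster_finish_aiocb` above is small but easy to misread in the flattened diff: a short transfer is now only an error for writes, while a partial read is reported as success. A Python sketch of the result mapping (the `failed_errno` parameter stands in for C's `errno`; names are illustrative):

```python
import errno

def finish_aiocb(ret, expected, failed_errno, is_write):
    """Map a glfs async result to acb->ret, as the patched callback does."""
    if ret == expected:
        return 0                     # full transfer: success
    if ret < 0:
        return -failed_errno         # read/write failed outright
    if is_write:
        return -errno.EIO            # partial write: still an error
    return 0                         # partial read: now treated as success

assert finish_aiocb(4096, 4096, 0, is_write=False) == 0
assert finish_aiocb(1024, 4096, 0, is_write=False) == 0          # partial read OK
assert finish_aiocb(1024, 4096, 0, is_write=True) == -errno.EIO  # partial write fails
```

The `is_write` flag added to `GlusterAIOCB` exists purely so the callback can make this distinction; zerofill, flush and discard all set it to true.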
[pve-devel] [PATCH 01/31] PVE: [Config] block/file: change locking default to off
From: Wolfgang Bumiller 'auto' only checks whether the system generally supports OFD locks but not whether the storage the file resides on supports any locking, causing issues with NFS. Signed-off-by: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- block/file-posix.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/block/file-posix.c b/block/file-posix.c index 1b805bd938..44b49265ae 100644 --- a/block/file-posix.c +++ b/block/file-posix.c @@ -449,7 +449,7 @@ static QemuOptsList raw_runtime_opts = { { .name = "locking", .type = QEMU_OPT_STRING, -.help = "file locking mode (on/off/auto, default: auto)", +.help = "file locking mode (on/off/auto, default: off)", }, { .name = "pr-manager", @@ -538,7 +538,7 @@ static int raw_open_common(BlockDriverState *bs, QDict *options, s->use_lock = false; break; case ON_OFF_AUTO_AUTO: -s->use_lock = qemu_has_ofd_lock(); +s->use_lock = false; break; default: abort(); -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 03/31] PVE: [Config] set the CPU model to kvm64/32 instead of qemu64/32
From: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- target/i386/cpu.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/target/i386/cpu.h b/target/i386/cpu.h index cde2a16b94..3e73104bf9 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -1940,9 +1940,9 @@ uint64_t cpu_get_tsc(CPUX86State *env); #define CPU_RESOLVING_TYPE TYPE_X86_CPU #ifdef TARGET_X86_64 -#define TARGET_DEFAULT_CPU_TYPE X86_CPU_TYPE_NAME("qemu64") +#define TARGET_DEFAULT_CPU_TYPE X86_CPU_TYPE_NAME("kvm64") #else -#define TARGET_DEFAULT_CPU_TYPE X86_CPU_TYPE_NAME("qemu32") +#define TARGET_DEFAULT_CPU_TYPE X86_CPU_TYPE_NAME("kvm32") #endif #define cpu_signal_handler cpu_x86_signal_handler -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 08/31] PVE: [Up] qmp: add get_link_status
From: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- net/net.c | 27 +++ qapi/net.json | 15 +++ qapi/qapi-schema.json | 1 + 3 files changed, 43 insertions(+) diff --git a/net/net.c b/net/net.c index 84aa6d8d00..f548202ec6 100644 --- a/net/net.c +++ b/net/net.c @@ -1349,6 +1349,33 @@ void hmp_info_network(Monitor *mon, const QDict *qdict) } } +int64_t qmp_get_link_status(const char *name, Error **errp) +{ +NetClientState *ncs[MAX_QUEUE_NUM]; +NetClientState *nc; +int queues; +bool ret; + +queues = qemu_find_net_clients_except(name, ncs, + NET_CLIENT_DRIVER__MAX, + MAX_QUEUE_NUM); + +if (queues == 0) { +error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND, + "Device '%s' not found", name); +return (int64_t) -1; +} + +nc = ncs[0]; +ret = ncs[0]->link_down; + +if (nc->peer->info->type == NET_CLIENT_DRIVER_NIC) { + ret = ncs[0]->peer->link_down; +} + +return (int64_t) ret ? 0 : 1; +} + void colo_notify_filters_event(int event, Error **errp) { NetClientState *nc; diff --git a/qapi/net.json b/qapi/net.json index 335295be50..7f3ea194c8 100644 --- a/qapi/net.json +++ b/qapi/net.json @@ -34,6 +34,21 @@ ## { 'command': 'set_link', 'data': {'name': 'str', 'up': 'bool'} } +## +# @get_link_status: +# +# Get the current link state of the nics or nic. +# +# @name: name of the nic you get the state of +# +# Return: If link is up 1 +# If link is down 0 +# If an error occure an empty string. +# +# Notes: this is an Proxmox VE extension and not offical part of Qemu. 
+## +{ 'command': 'get_link_status', 'data': {'name': 'str'}, 'returns': 'int'} + ## # @netdev_add: # diff --git a/qapi/qapi-schema.json b/qapi/qapi-schema.json index 9751b11f8f..a449f158e1 100644 --- a/qapi/qapi-schema.json +++ b/qapi/qapi-schema.json @@ -61,6 +61,7 @@ 'query-migrate-cache-size', 'query-tpm-models', 'query-tpm-types', +'get_link_status', 'ringbuf-read' ], 'name-case-whitelist': [ 'ACPISlotType', # DIMM, visible through query-acpi-ospm-status -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
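The lookup logic of `qmp_get_link_status` above — find the client by name, read its `link_down` flag, but report the peer's state when the peer is the NIC itself — can be modeled in Python with plain dicts standing in for `NetClientState` (a toy model, not QEMU structures):

```python
def get_link_status(clients, name):
    """Return 1 if the named client's link is up, 0 if down."""
    ncs = [nc for nc in clients if nc["name"] == name]
    if not ncs:
        raise LookupError("Device '%s' not found" % name)
    nc = ncs[0]
    link_down = nc["link_down"]
    # when the peer is the NIC itself, report the peer's link state instead
    if nc["peer"]["type"] == "nic":
        link_down = nc["peer"]["link_down"]
    return 0 if link_down else 1

# a backend whose NIC peer has been set link-down reports 0 (down)
net = [{"name": "tap0", "link_down": False,
        "peer": {"type": "nic", "link_down": True}}]
assert get_link_status(net, "tap0") == 0
```

Note the inversion at the end: internally `link_down` is true when the link is down, while the QMP command returns 1 for up and 0 for down, matching the documented return values.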
[pve-devel] [PATCH 02/31] PVE: [Config] Adjust network script path to /etc/kvm/
From: Wolfgang Bumiller Signed-off-by: Thomas Lamprecht --- include/net/net.h | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/include/net/net.h b/include/net/net.h index e175ba9677..5b9f099d21 100644 --- a/include/net/net.h +++ b/include/net/net.h @@ -208,8 +208,9 @@ void qmp_netdev_add(QDict *qdict, QObject **ret, Error **errp); int net_hub_id_for_client(NetClientState *nc, int *id); NetClientState *net_hub_port_find(int hub_id); -#define DEFAULT_NETWORK_SCRIPT "/etc/qemu-ifup" -#define DEFAULT_NETWORK_DOWN_SCRIPT "/etc/qemu-ifdown" +#define DEFAULT_NETWORK_SCRIPT "/etc/kvm/kvm-ifup" +#define DEFAULT_NETWORK_DOWN_SCRIPT "/etc/kvm/kvm-ifdown" + #define DEFAULT_BRIDGE_HELPER CONFIG_QEMU_HELPERDIR "/qemu-bridge-helper" #define DEFAULT_BRIDGE_INTERFACE "br0" -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH 00/31 qemu] PVE qemu patches rebased for qemu 4.2.0
Hi all, recent changes in qemu made it necessary to restructure our backup patches. I now use a special block driver which calls the backup_dump callback. I merged all backup related code into the last 3 patches. Alexandre Derumier (2): PVE: [Config] smm_available = false PVE: [Up] qemu-img dd : add -n skip_create Dietmar Maurer (6): PVE: qapi: modify query machines PVE: internal snapshot async PVE: Allow version code in machine type PVE-Backup - add vma code PVE-Backup: add backup-dump block driver PVE-Backup - proxmox backup patches for qemu Stefan Reiter (1): PVE: Acquire aio_context before calling block_job_add_bdrv Thomas Lamprecht (2): PVE: savevm-async: kick AIO wait on block state write PVE: [Compat]: 4.0 used balloon qemu-4-0-config-size false here Wolfgang Bumiller (20): PVE: [Config] block/file: change locking default to off PVE: [Config] Adjust network script path to /etc/kvm/ PVE: [Config] set the CPU model to kvm64/32 instead of qemu64/32 PVE: [Config] ui/spice: default to pve certificates PVE: [Config] glusterfs: no default logfile if daemonized PVE: [Config] rbd: block: rbd: disable rbd_cache_writethrough_until_flush PVE: [Up] qmp: add get_link_status PVE: [Up] glusterfs: allow partial reads PVE: [Up] qemu-img: return success on info without snapshots PVE: [Up] qemu-img dd: add osize and read from/to stdin/stdout PVE: [Up] qemu-img dd: add isize parameter PVE: virtio-balloon: improve query-balloon PVE: qapi: modify spice query PVE: block: add the zeroinit block driver filter PVE: backup: modify job api PVE: Add dummy -id command line parameter PVE: [Config] Revert "target-i386: disable LINT0 after reset" PVE: [Up+Config] file-posix: make locking optiono on create PVE: move snapshot cleanup into bottom half PVE: monitor: disable oob capability Makefile | 3 +- Makefile.objs| 2 + block/Makefile.objs | 2 + block/backup-dump.c | 170 +++ block/backup.c | 26 +- block/file-posix.c | 65 ++- block/gluster.c | 25 +- block/io.c | 8 +- block/rbd.c | 2 + 
block/replication.c | 2 +- block/zeroinit.c | 204 + blockdev.c | 826 - blockjob.c | 10 + hmp-commands-info.hx | 26 ++ hmp-commands.hx | 63 +++ hw/core/machine-qmp-cmds.c | 12 + hw/core/machine.c| 3 +- hw/i386/pc.c | 2 +- hw/intc/apic_common.c| 9 + hw/virtio/virtio-balloon.c | 33 +- include/block/block_int.h| 31 ++ include/hw/boards.h | 2 + include/migration/snapshot.h | 1 + include/monitor/hmp.h| 8 + include/net/net.h| 5 +- job.c| 5 +- monitor/hmp-cmds.c | 156 ++- monitor/qmp.c| 3 +- net/net.c| 27 ++ qapi/block-core.json | 94 +++- qapi/common.json | 13 + qapi/machine.json| 7 +- qapi/migration.json | 34 ++ qapi/misc.json | 67 ++- qapi/net.json| 15 + qapi/qapi-schema.json| 1 + qapi/ui.json | 3 + qemu-img-cmds.hx | 4 +- qemu-img.c | 231 ++ qemu-options.hx | 16 + savevm-async.c | 465 +++ target/i386/cpu.h| 4 +- ui/spice-core.c | 20 +- vl.c | 33 +- vma-reader.c | 857 +++ vma-writer.c | 771 +++ vma.c| 837 ++ vma.h| 150 ++ 48 files changed, 5188 insertions(+), 165 deletions(-) create mode 100644 block/backup-dump.c create mode 100644 block/zeroinit.c create mode 100644 savevm-async.c create mode 100644 vma-reader.c create mode 100644 vma-writer.c create mode 100644 vma.c create mode 100644 vma.h -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH pve-manager] www/manager6/storage/ContentView.js: consider new ctime value
Signed-off-by: Dietmar Maurer --- www/manager6/storage/ContentView.js | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/www/manager6/storage/ContentView.js b/www/manager6/storage/ContentView.js index ffd38fb9..001efc7f 100644 --- a/www/manager6/storage/ContentView.js +++ b/www/manager6/storage/ContentView.js @@ -680,11 +680,15 @@ Ext.define('PVE.storage.ContentView', { let v = record.data.volid; let match = v.match(/(\d{4}_\d{2}_\d{2})-(\d{2}_\d{2}_\d{2})/); if (match) { - let date = match[1].replace(/_/g, '.'); + let date = match[1].replace(/_/g, '-'); let time = match[2].replace(/_/g, ':'); return date + " " + time; } } + if (record.data.ctime) { + let ctime = new Date(record.data.ctime * 1000); + return Ext.Date.format(ctime,'Y-m-d H:i:s'); + } return ''; } }, -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
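The renderer's fallback chain above — parse a `YYYY_MM_DD-HH_MM_SS` timestamp out of the volid if present, otherwise format the new `ctime` field (epoch seconds) — can be sketched in Python (the real code is ExtJS; UTC is used here for determinism, whereas `Ext.Date.format` uses local time):

```python
import re
from datetime import datetime, timezone

def render_time(volid, ctime=None):
    """First try the timestamp embedded in the volid, then fall back to ctime."""
    m = re.search(r"(\d{4}_\d{2}_\d{2})-(\d{2}_\d{2}_\d{2})", volid)
    if m:
        date = m.group(1).replace("_", "-")   # the patch fixes '_' -> '-' here
        time = m.group(2).replace("_", ":")
        return date + " " + time
    if ctime:
        dt = datetime.fromtimestamp(ctime, tz=timezone.utc)
        return dt.strftime("%Y-%m-%d %H:%M:%S")
    return ""

assert render_time("backup/vzdump-qemu-100-2020_04_01-03_15_00.vma.lzo") \
    == "2020-04-01 03:15:00"
assert render_time("local:iso/debian.iso", ctime=86400) == "1970-01-02 00:00:00"
```

Backup volids keep using their embedded timestamp; everything else (ISOs, templates, raw images) now gets a creation time from the storage backend instead of an empty cell.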
[pve-devel] [PATCH pve-storage 4/6] PVE/Storage/Plugin.pm: add ctime for all files
Creation time makes sense for other file types also.

Signed-off-by: Dietmar Maurer
---
 PVE/Storage/Plugin.pm | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index bd4bb8c..85af1c8 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -3,6 +3,7 @@ package PVE::Storage::Plugin;
 use strict;
 use warnings;
+use Fcntl ':mode';
 use File::chdir;
 use File::Path;
 use File::Basename;
@@ -904,7 +905,11 @@ my $get_subdir_files = sub {
 
     foreach my $fn (<$path/*>) {
 
-	next if -d $fn;
+	my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
+	    $atime,$mtime,$ctime,$blksize,$blocks)
+	    = stat($fn);
+
+	next if S_ISDIR($mode);
 
 	my $info;
 
@@ -943,7 +948,8 @@ my $get_subdir_files = sub {
 	    };
 	}
 
-	$info->{size} = -s $fn // 0;
+	$info->{size} = $size;
+	$info->{ctime} //= $ctime;
 
 	push @$res, $info;
     }
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
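The point of the patch is that one `stat()` call now yields both the size and the ctime, instead of a `-d` test plus a separate `-s` size lookup per entry. The same idea in Python (a hypothetical helper, not Proxmox code):

```python
import os
import stat

def subdir_file_info(path):
    """Collect size and ctime per regular file, skipping directories,
    using a single stat() call per entry (mirroring the patched Perl loop)."""
    res = []
    for name in sorted(os.listdir(path)):
        fn = os.path.join(path, name)
        st = os.stat(fn)  # one stat() gives mode, size and ctime at once
        if stat.S_ISDIR(st.st_mode):
            continue
        res.append({"volid": name, "size": st.st_size, "ctime": int(st.st_ctime)})
    return res
```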
[pve-devel] [PATCH pve-storage 5/6] PVE/Storage/Plugin.pm: return ctime for vm images
Changed file_size_info() to additionally return ctime to avoid another stat() call. Signed-off-by: Dietmar Maurer --- PVE/Storage/Plugin.pm | 18 +- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm index 85af1c8..7951c13 100644 --- a/PVE/Storage/Plugin.pm +++ b/PVE/Storage/Plugin.pm @@ -718,8 +718,12 @@ sub free_image { sub file_size_info { my ($filename, $timeout) = @_; -if (-d $filename) { - return wantarray ? (0, 'subvol', 0, undef) : 1; +my @fs = stat($filename); +my $mode = $fs[2]; +my $ctime = $fs[10]; + +if (S_ISDIR($mode)) { + return wantarray ? (0, 'subvol', 0, undef, $ctime) : 1; } my $json = ''; @@ -737,7 +741,7 @@ sub file_size_info { my ($size, $format, $used, $parent) = $info->@{qw(virtual-size format actual-size backing-filename)}; -return wantarray ? ($size, $format, $used, $parent) : $size; +return wantarray ? ($size, $format, $used, $parent, $ctime) : $size; } sub volume_size_info { @@ -872,7 +876,7 @@ sub list_images { next if !$vollist && defined($vmid) && ($owner ne $vmid); - my ($size, $format, $used, $parent) = file_size_info($fn); + my ($size, $format, $used, $parent, $ctime) = file_size_info($fn); next if !($format && defined($size)); my $volid; @@ -888,10 +892,14 @@ sub list_images { next if !$found; } - push @$res, { +my $info = { volid => $volid, format => $format, size => $size, vmid => $owner, used => $used, parent => $parent }; + +$info->{ctime} = $ctime if $ctime; + +push @$res, $info; } return $res; -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH pve-storage 6/6] LVM list_images: return creation time
Signed-off-by: Dietmar Maurer --- PVE/Storage/LVMPlugin.pm | 15 +++ PVE/Storage/LvmThinPlugin.pm | 1 + 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm index f02c110..c9fc191 100644 --- a/PVE/Storage/LVMPlugin.pm +++ b/PVE/Storage/LVMPlugin.pm @@ -148,9 +148,14 @@ sub lvm_vgs { sub lvm_list_volumes { my ($vgname) = @_; -my $cmd = ['/sbin/lvs', '--separator', ':', '--noheadings', '--units', 'b', - '--unbuffered', '--nosuffix', '--options', - 'vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags,metadata_size']; +my $option_list = 'vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags,metadata_size,time'; + +my $cmd = [ + '/sbin/lvs', '--separator', ':', '--noheadings', '--units', 'b', + '--unbuffered', '--nosuffix', + '--config', 'report/time_format="%s"', + '--options', $option_list, +]; push @$cmd, $vgname if $vgname; @@ -160,7 +165,7 @@ sub lvm_list_volumes { $line = trim($line); - my ($vg_name, $lv_name, $lv_size, $lv_attr, $pool_lv, $data_percent, $meta_percent, $snap_percent, $uuid, $tags, $meta_size) = split(':', $line); + my ($vg_name, $lv_name, $lv_size, $lv_attr, $pool_lv, $data_percent, $meta_percent, $snap_percent, $uuid, $tags, $meta_size, $ctime) = split(':', $line); return if !$vg_name; return if !$lv_name; @@ -172,6 +177,7 @@ sub lvm_list_volumes { }; $d->{pool_lv} = $pool_lv if $pool_lv; $d->{tags} = $tags if $tags; + $d->{ctime} = $ctime; if ($lv_type eq 't') { $data_percent ||= 0; @@ -451,6 +457,7 @@ sub list_images { push @$res, { volid => $volid, format => 'raw', size => $info->{lv_size}, vmid => $owner, + ctime => $info->{ctime}, }; } } diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm index 88060c7..d1c5b1f 100644 --- a/PVE/Storage/LvmThinPlugin.pm +++ b/PVE/Storage/LvmThinPlugin.pm @@ -165,6 +165,7 @@ sub list_images { push @$res, { volid => $volid, format => 'raw', size => 
$info->{lv_size}, vmid => $owner, + ctime => $info->{ctime}, }; } } -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH pve-storage 2/6] PVE/Storage/PBSPlugin.pm - list_volumes: add ctime
Signed-off-by: Dietmar Maurer
---
 PVE/Storage/PBSPlugin.pm | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
index fcb1597..2a4c19c 100644
--- a/PVE/Storage/PBSPlugin.pm
+++ b/PVE/Storage/PBSPlugin.pm
@@ -282,18 +282,21 @@ sub list_volumes {
     foreach my $item (@$data) {
 	my $btype = $item->{"backup-type"};
 	my $bid = $item->{"backup-id"};
-	my $btime = $item->{"backup-time"};
+	my $epoch = $item->{"backup-time"};
 	my $size = $item->{size} // 1;
 
 	next if !($btype eq 'vm' || $btype eq 'ct');
 	next if $bid !~ m/^\d+$/;
 
-	$btime = strftime("%FT%TZ", gmtime($btime));
+	my $btime = strftime("%FT%TZ", gmtime($epoch));
 
 	my $volname = "backup/${btype}/${bid}/${btime}";
 
 	my $volid = "$storeid:$volname";
 
-	my $info = { volid => $volid , format => "pbs-$btype", size => $size, content => 'backup', vmid => int($bid) };
+	my $info = {
+	    volid => $volid , format => "pbs-$btype", size => $size,
+	    content => 'backup', vmid => int($bid), ctime => $epoch
+	};
 
 	push @$res, $info;
     }
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
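The patch keeps the raw `backup-time` epoch for the new `ctime` field while still deriving the volume name from it via `strftime`. Equivalent logic in Python (the volume-name scheme is taken from the patch; `%FT%TZ` is spelled out since Python's `strftime` does not support `%F`/`%T` everywhere):

```python
import time

def pbs_volname(btype, bid, epoch):
    """Build the PBS volume name from the raw backup-time epoch (UTC),
    keeping the epoch itself available as ctime."""
    btime = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(epoch))
    return f"backup/{btype}/{bid}/{btime}", epoch
```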
[pve-devel] [PATCH pve-storage 3/6] PVE/API2/Storage/Content.pm - index: add ctime to return schema
Signed-off-by: Dietmar Maurer --- PVE/API2/Storage/Content.pm | 6 ++ 1 file changed, 6 insertions(+) diff --git a/PVE/API2/Storage/Content.pm b/PVE/API2/Storage/Content.pm index ce89ec5..80c9501 100644 --- a/PVE/API2/Storage/Content.pm +++ b/PVE/API2/Storage/Content.pm @@ -81,6 +81,12 @@ __PACKAGE__->register_method ({ renderer => 'bytes', optional => 1, }, + ctime => { + description => "Creation time (Unix epoch). Currently only set for backup volumes.", + type => 'integer', + minimum => 0, + optional => 1, + }, }, }, links => [ { rel => 'child', href => "{volid}" } ], -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH pve-storage 1/6] PVE/Storage/Plugin.pm: add ctime for backup files
Signed-off-by: Dietmar Maurer
---
 PVE/Storage/Plugin.pm | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index eab73f5..bd4bb8c 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -6,6 +6,7 @@ use warnings;
 use File::chdir;
 use File::Path;
 use File::Basename;
+use Time::Local qw(timelocal);
 
 use PVE::Tools qw(run_command);
 use PVE::JSONSchema qw(get_standard_option);
@@ -924,6 +925,11 @@ my $get_subdir_files = sub {
 	    my $format = $2;
 	    $info = { volid => "$sid:backup/$1", format => $format };
 
+	    if ($fn =~ m!^vzdump\-(?:lxc|qemu)\-(?:[1-9][0-9]{2,8})\-(\d{4})_(\d{2})_(\d{2})\-(\d{2})_(\d{2})_(\d{2})\.${format}$!) {
+		my $epoch = timelocal($6, $5, $4, $3, $2-1, $1 - 1900);
+		$info->{ctime} = $epoch;
+	    }
+
 	    if (defined($vmid) || $fn =~ m!\-([1-9][0-9]{2,8})\-[^/]+\.${format}$!) {
 		$info->{vmid} = $vmid // $1;
 	    }
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
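The added regex recovers the creation time from the timestamp encoded in the vzdump file name, interpreted as local time via Perl's `timelocal()`. The same idea in Python (helper name and accepted suffixes are illustrative):

```python
import re
import time

# vzdump-{lxc|qemu}-<vmid>-YYYY_MM_DD-HH_MM_SS.<suffix>
VZDUMP_RE = re.compile(
    r"vzdump-(?:lxc|qemu)-[1-9]\d{2,8}-"
    r"(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(?:vma|tar)"
)

def vzdump_ctime(filename):
    """Return the Unix epoch encoded in a vzdump backup file name,
    or None if the name does not match (local time, like Perl's timelocal)."""
    m = VZDUMP_RE.search(filename)
    if not m:
        return None
    year, mon, day, hour, minute, sec = map(int, m.groups())
    # isdst=-1 lets mktime() work out daylight saving time itself
    return int(time.mktime((year, mon, day, hour, minute, sec, -1, -1, -1)))
```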
[pve-devel] [PATCH pve-storage 1/3] PVE/Storage/Plugin.pm: introduce on_update_hook
We need this to correctly update the password file. --- PVE/API2/Storage/Config.pm | 25 +++-- PVE/Storage/Plugin.pm | 9 + 2 files changed, 32 insertions(+), 2 deletions(-) diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm index d202784..09724f4 100755 --- a/PVE/API2/Storage/Config.pm +++ b/PVE/API2/Storage/Config.pm @@ -204,12 +204,25 @@ __PACKAGE__->register_method ({ PVE::SectionConfig::assert_if_modified($cfg, $digest); my $scfg = PVE::Storage::storage_config($cfg, $storeid); + my $type = $scfg->{type}; + + my $password; + # always extract pw, else it gets written to the www-data readable scfg + if (my $tmp_pw = extract_param($param, 'password')) { + if (($type eq 'pbs') || ($type eq 'cifs' && $param->{username})) { + $password = $tmp_pw; + } else { + warn "ignore password parameter\n"; + } + } - my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); + my $plugin = PVE::Storage::Plugin->lookup($type); my $opts = $plugin->check_config($storeid, $param, 0, 1); + my $delete_password = 0; + if ($delete) { - my $options = $plugin->private()->{options}->{$scfg->{type}}; + my $options = $plugin->private()->{options}->{$type}; foreach my $k (PVE::Tools::split_list($delete)) { my $d = $options->{$k} || die "no such option '$k'\n"; die "unable to delete required option '$k'\n" if !$d->{optional}; @@ -218,9 +231,17 @@ __PACKAGE__->register_method ({ if defined($opts->{$k}); delete $scfg->{$k}; + + $delete_password = 1 if $k eq 'password'; } } + if ($delete_password || defined($password)) { + $plugin->on_update_hook($storeid, $opts, password => $password); + } else { + $plugin->on_update_hook($storeid, $opts); + } + for my $k (keys %$opts) { $scfg->{$k} = $opts->{$k}; } diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm index 0c39cbd..fb06c38 100644 --- a/PVE/Storage/Plugin.pm +++ b/PVE/Storage/Plugin.pm @@ -366,6 +366,15 @@ sub on_add_hook { # do nothing by default } +# called during storage configuration update (before the updated storage 
config got written) +# die to abort the update if there are (grave) problems +# NOTE: runs in a storage config *locked* context +sub on_update_hook { +my ($class, $storeid, $scfg, %param) = @_; + +# do nothing by default +} + # called during deletion of storage (before the new storage config got written) # and if the activate check on addition fails, to cleanup all storage traces # which on_add_hook may have created. -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH pve-storage 2/3] CIFSPlugin.pm: fix credential handling using new on_update_hook
--- PVE/Storage/CIFSPlugin.pm | 32 ++-- 1 file changed, 26 insertions(+), 6 deletions(-) diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm index 6115a96..6c8cd72 100644 --- a/PVE/Storage/CIFSPlugin.pm +++ b/PVE/Storage/CIFSPlugin.pm @@ -34,6 +34,15 @@ sub cifs_cred_file_name { return "/etc/pve/priv/${storeid}.cred"; } +sub cifs_delete_credentials { +my ($storeid) = @_; + +my $cred_file = cifs_cred_file_name($storeid); +if (-f $cred_file) { + unlink($cred_file) or warn "removing cifs credientials '$cred_file' failed: $!\n"; +} +} + sub cifs_set_credentials { my ($password, $storeid) = @_; @@ -145,18 +154,29 @@ sub check_config { sub on_add_hook { my ($class, $storeid, $scfg, %param) = @_; -if (my $password = $param{password}) { - cifs_set_credentials($password, $storeid); +if (defined($param{password})) { + cifs_set_credentials($param{password}, $storeid); +} else { + cifs_delete_credentials($storeid); +} +} + +sub on_update_hook { +my ($class, $storeid, $scfg, %param) = @_; + +return if !exists($param{password}); + +if (defined($param{password})) { + cifs_set_credentials($param{password}, $storeid); +} else { + cifs_delete_credentials($storeid); } } sub on_delete_hook { my ($class, $storeid, $scfg) = @_; -my $cred_file = cifs_cred_file_name($storeid); -if (-f $cred_file) { - unlink($cred_file) or warn "removing cifs credientials '$cred_file' failed: $!\n"; -} +cifs_delete_credentials($storeid); } sub status { -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
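The new `on_update_hook` distinguishes three cases via `exists` vs `defined`: the password was not part of the update at all (leave the credential file alone), it was set to a value (rewrite the file), or it was explicitly deleted (remove the file). A Python sketch of that three-state logic (all names hypothetical; a dict stands in for the credential files):

```python
_ABSENT = object()  # sentinel: "password was not part of this update"

def on_update_hook(storeid, password=_ABSENT, cred_store=None):
    """Mimic the patch's semantics:

    password absent      -> credentials stay as they are
    password is a string -> credentials are (re)written
    password is None     -> credentials are removed
    """
    cred_store = cred_store if cred_store is not None else {}
    if password is _ABSENT:
        return cred_store          # exists($param{password}) is false
    if password is None:
        cred_store.pop(storeid, None)   # delete the credential file
    else:
        cred_store[storeid] = password  # (re)write the credential file
    return cred_store
```

A plain `None` default could not express the first case, which is exactly why the Perl code checks `exists` before `defined`.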
Re: [pve-devel] working on ifupdown2 openvswitch addon
> I'm currently working on a openvswitch addon for ifupdown2. > > I think it'll be finished next week. (It's almost working,need more test and > polish) > > I have also implemented reloading, seem to works fine :) great! ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] Polish translation
see https://pve.proxmox.com/wiki/Translations > On 14 December 2019 17:55 Daniel Koć wrote: > > > Hi, all! > > I find the Proxmox a great piece of software for clusters, so thanks a > lot for it! I currently test a pilot deployment in my work. > > I'd like to make some updates to the Polish translation file. I was a > contributor to the various FOSS projects and I know basics of git. What > else is needed for me to know in the Proxmox to start contributing > translation? > > > -- > "Rzeczy się psują – zęby, spłuczki, kompy, związki, pralki" [Bisz] > > > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option
> The main reason for this is to identify backups residing on an old backup > store like an archive. > > > > But I am open. Would you prefer having a manifest included in the archive or > as a separate file on the same storage? The backup archive already contains the full VM config. I thought the manifest should be an extra file on the same storage. ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option
> IMHO this is the wrong way to store additional information about > the backup. I am thinking about adding a manifest.json file which may contain such information. ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option
IMHO this is the wrong way to store additional information about the backup. > On 13 November 2019 15:02 Marco Gabriel wrote: > > > Signed-off-by: Marco Gabriel > --- > PVE/VZDump/Common.pm | 6 ++ > 1 file changed, 6 insertions(+) > > diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm > index 4789a50..0a70b0c 100644 > --- a/PVE/VZDump/Common.pm > +++ b/PVE/VZDump/Common.pm > @@ -213,6 +213,12 @@ my $confdesc = { > type => 'string', > description => 'Backup all known guest systems included in the > specified pool.', > optional => 1, > +}, > +includename => { > + type => 'boolean', > + description => 'Include name of VM in backup file name.', > + optional => 1, > + default => 0, > } > }; > > -- > 2.20.1 > > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH qemu 1/4] pvebackup_co_dump_cb: do not call job->cancel()
Despite the subject (1/4), this is just a single patch (sorry).

> On 27 October 2019 08:24 Dietmar Maurer wrote:
> 
> 
> The backup loop will automatically abort if we return an error.
> ---
>  blockdev.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/blockdev.c b/blockdev.c
> index 07561b6f96..3343388978 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -3254,10 +3254,8 @@ static int coroutine_fn pvebackup_co_dump_cb(void *opaque, BlockBackend *target,
>              if (!backup_state.error) {
>                  vma_writer_error_propagate(backup_state.vmaw, &backup_state.error);
>              }
> -            if (di->bs && di->bs->job) {
> -                job_cancel(&di->bs->job->job, true);
> -            }
> -            break;
> +            qemu_co_mutex_unlock(&backup_state.backup_mutex);
> +            return ret;
>          } else {
>              backup_state.zero_bytes += zero_bytes;
>              if (remaining >= VMA_CLUSTER_SIZE) {
> -- 
> 2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH qemu 1/4] pvebackup_co_dump_cb: do not call job->cancel()
The backup loop will automatically abort if we return an error.
---
 blockdev.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 07561b6f96..3343388978 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3254,10 +3254,8 @@ static int coroutine_fn pvebackup_co_dump_cb(void *opaque, BlockBackend *target,
             if (!backup_state.error) {
                 vma_writer_error_propagate(backup_state.vmaw, &backup_state.error);
             }
-            if (di->bs && di->bs->job) {
-                job_cancel(&di->bs->job->job, true);
-            }
-            break;
+            qemu_co_mutex_unlock(&backup_state.backup_mutex);
+            return ret;
         } else {
             backup_state.zero_bytes += zero_bytes;
             if (remaining >= VMA_CLUSTER_SIZE) {
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH qemu] pvebackup_complete_cb: avoid poll loop if already inside coroutine
---
 blockdev.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 1bfd85ebc1..5580d36da7 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3170,6 +3170,8 @@ static void coroutine_fn block_on_coroutine_wrapper(void *opaque)
 
 static void block_on_coroutine_fn(CoroutineEntry *entry, void *entry_arg)
 {
+    assert(!qemu_in_coroutine());
+
     AioContext *ctx = qemu_get_current_aio_context();
     BlockOnCoroutineWrapper wrapper = {
         .finished = false,
@@ -3499,13 +3501,17 @@ static void coroutine_fn pvebackup_co_complete_cb(void *opaque)
 
 static void pvebackup_complete_cb(void *opaque, int ret)
 {
-    // This always called from the main loop
+    // This can be called from the main loop, or from a coroutine
 
     PVEBackupCompeteCallbackData cb_data = {
         .di = opaque,
         .result = ret,
     };
 
-    block_on_coroutine_fn(pvebackup_co_complete_cb, &cb_data);
+    if (qemu_in_coroutine()) {
+        pvebackup_co_complete_cb(&cb_data);
+    } else {
+        block_on_coroutine_fn(pvebackup_co_complete_cb, &cb_data);
+    }
 }
 
 static void coroutine_fn pvebackup_co_cancel(void *opaque)
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH qemu] fix backup job completion
Please wait, seems this is still buggy - need to do more tests ...

> On 25 October 2019 11:22 Dietmar Maurer wrote:
> 
> 
> With recent changes, pvebackup_co_run_next_job cancels the job async,
> so we need to run pvebackup_co_cleanup in the completion handler
> instead. We call pvebackup_co_run_next as long as there are
> jobs in the list.
> 
> Signed-off-by: Dietmar Maurer
> ---
>  blockdev.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/blockdev.c b/blockdev.c
> index caff370f2e..89b88837cf 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -3491,12 +3491,14 @@ static void coroutine_fn pvebackup_co_complete_cb(void *opaque)
>      backup_state.di_list = g_list_remove(backup_state.di_list, di);
>      g_free(di);
> 
> -    bool cancel = backup_state.cancel;
> +    int pending_jobs = g_list_length(backup_state.di_list);
> 
>      qemu_co_mutex_unlock(&backup_state.backup_mutex);
> 
> -    if (!cancel) {
> +    if (pending_jobs > 0) {
>          pvebackup_co_run_next_job();
> +    } else {
> +        pvebackup_co_cleanup();
>      }
>  }
> 
> @@ -3650,9 +3652,6 @@ static void coroutine_fn pvebackup_co_run_next_job(void)
>          }
>      }
>      qemu_co_mutex_unlock(&backup_state.backup_mutex);
> -
> -    // no more jobs, run the cleanup
> -    pvebackup_co_cleanup();
>  }
> 
>  typedef struct QmpBackupTask {
> -- 
> 2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH qemu] fix backup job completion
With recent changes, pvebackup_co_run_next_job cancels the job async,
so we need to run pvebackup_co_cleanup in the completion handler
instead. We call pvebackup_co_run_next as long as there are
jobs in the list.

Signed-off-by: Dietmar Maurer
---
 blockdev.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index caff370f2e..89b88837cf 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3491,12 +3491,14 @@ static void coroutine_fn pvebackup_co_complete_cb(void *opaque)
     backup_state.di_list = g_list_remove(backup_state.di_list, di);
     g_free(di);
 
-    bool cancel = backup_state.cancel;
+    int pending_jobs = g_list_length(backup_state.di_list);
 
     qemu_co_mutex_unlock(&backup_state.backup_mutex);
 
-    if (!cancel) {
+    if (pending_jobs > 0) {
         pvebackup_co_run_next_job();
+    } else {
+        pvebackup_co_cleanup();
     }
 }
 
@@ -3650,9 +3652,6 @@ static void coroutine_fn pvebackup_co_run_next_job(void)
         }
     }
     qemu_co_mutex_unlock(&backup_state.backup_mutex);
-
-    // no more jobs, run the cleanup
-    pvebackup_co_cleanup();
 }
 
 typedef struct QmpBackupTask {
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
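After this change the completion handler drives the whole lifecycle: each finished job is removed from the list, the next job is started while any remain, and cleanup runs only once the list is empty. The control flow, reduced to a Python sketch (the real code is C with coroutines and a mutex; the callbacks here are placeholders):

```python
def complete_job(di_list, run_next_job, cleanup):
    """Pop one finished job; start the next while jobs remain,
    otherwise run the final cleanup (mirrors pvebackup_co_complete_cb)."""
    di_list.pop(0)
    if di_list:
        run_next_job()
    else:
        cleanup()
```

The previous version instead ran cleanup unconditionally at the end of `pvebackup_co_run_next_job()`, which races with async cancellation; tying cleanup to the last completion avoids that.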
[pve-devel] [PATCH qemu 1/3] backup_job_create: pass cluster size for dump
Signed-off-by: Dietmar Maurer --- block/backup.c| 8 +++- block/replication.c | 2 +- blockdev.c| 10 ++ include/block/block_int.h | 4 4 files changed, 18 insertions(+), 6 deletions(-) diff --git a/block/backup.c b/block/backup.c index 5240f71bb5..2ccec79db6 100644 --- a/block/backup.c +++ b/block/backup.c @@ -579,6 +579,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs, BlockdevOnError on_target_error, int creation_flags, BackupDumpFunc *dump_cb, + int dump_cb_block_size, BlockCompletionFunc *cb, void *opaque, int pause_count, JobTxn *txn, Error **errp) @@ -649,7 +650,12 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs, goto error; } -cluster_size = backup_calculate_cluster_size(target ? target : bs, errp); +if (target) { +cluster_size = backup_calculate_cluster_size(target, errp); +} else { +cluster_size = dump_cb_block_size; +} + if (cluster_size < 0) { goto error; } diff --git a/block/replication.c b/block/replication.c index e85c62ba9c..a2ad512251 100644 --- a/block/replication.c +++ b/block/replication.c @@ -543,7 +543,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode, 0, MIRROR_SYNC_MODE_NONE, NULL, false, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL, -NULL, +NULL, 0, backup_job_completed, bs, 0, NULL, _err); if (local_err) { error_propagate(errp, local_err); diff --git a/blockdev.c b/blockdev.c index 7e9241cf42..6d16043131 100644 --- a/blockdev.c +++ b/blockdev.c @@ -3524,6 +3524,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque) GList *l; UuidInfo *uuid_info; BlockJob *job; +int dump_cb_block_size = -1; if (!backup_state.backup_mutex_initialized) { qemu_co_mutex_init(_state.backup_mutex); @@ -3611,6 +3612,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque) uuid_generate(uuid); if (format == BACKUP_FORMAT_VMA) { +dump_cb_block_size = VMA_CLUSTER_SIZE; vmaw = vma_writer_create(task->backup_file, uuid, _err); if (!vmaw) { if (local_err) { @@ 
-3718,8 +3720,8 @@ static void coroutine_fn pvebackup_co_start(void *opaque) l = g_list_next(l); job = backup_job_create(NULL, di->bs, di->target, backup_state.speed, MIRROR_SYNC_MODE_FULL, NULL, false, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT, -JOB_DEFAULT, pvebackup_co_dump_cb, pvebackup_complete_cb, di, -1, NULL, _err); +JOB_DEFAULT, pvebackup_co_dump_cb, dump_cb_block_size, +pvebackup_complete_cb, di, 1, NULL, _err); if (!job || local_err != NULL) { error_setg(_state.error, "backup_job_create failed"); break; @@ -4284,7 +4286,7 @@ static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn, job = backup_job_create(backup->job_id, bs, target_bs, backup->speed, backup->sync, bmap, backup->compress, backup->on_source_error, backup->on_target_error, -job_flags, NULL, NULL, NULL, 0, txn, _err); +job_flags, NULL, 0, NULL, NULL, 0, txn, _err); bdrv_unref(target_bs); if (local_err != NULL) { error_propagate(errp, local_err); @@ -4394,7 +4396,7 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, JobTxn *txn, job = backup_job_create(backup->job_id, bs, target_bs, backup->speed, backup->sync, bmap, backup->compress, backup->on_source_error, backup->on_target_error, -job_flags, NULL, NULL, NULL, 0, txn, _err); +job_flags, NULL, 0, NULL, NULL, 0, txn, _err); if (local_err != NULL) { error_propagate(errp, local_err); } diff --git a/include/block/block_int.h b/include/block/block_int.h index fd1828cd70..0ac312b359 100644 --- a/include/block/block_int.h +++ b/include/block/block_int.h @@ -1144,6 +1144,9 @@ void mirror_start(const char *job_id, BlockDriverState *bs, * @on_target_error: The action to take upon error writing to the target. * @creation_flags: Flags that control the behavior of the Job lifetime. * See @BlockJobCreateFlags + * @dump_cb: Callback for PVE backup code. Called for each data block when + *
[pve-devel] [PATCH qemu 2/3] avoid calling dump_cb with NULL data pointer for small/last cluster
The last block of a backup may be smaller than cluster_size.

Signed-off-by: Dietmar Maurer
---
 block/backup.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/backup.c b/block/backup.c
index 2ccec79db6..cc20d77b9f 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -133,7 +133,12 @@ static int coroutine_fn backup_cow_with_bounce_buffer(BackupBlockJob *job,
 
     if (qemu_iovec_is_zero(&qiov)) {
         if (job->dump_cb) {
-            ret = job->dump_cb(job->common.job.opaque, job->target, start, qiov.size, NULL);
+            if (qiov.size == job->cluster_size) {
+                // Note: pass NULL to indicate that we want to write [0u8; cluster_size]
+                ret = job->dump_cb(job->common.job.opaque, job->target, start, qiov.size, NULL);
+            } else {
+                ret = job->dump_cb(job->common.job.opaque, job->target, start, qiov.size, *bounce_buffer);
+            }
         } else {
             ret = blk_co_pwrite_zeroes(job->target, start, qiov.size,
                                        write_flags | BDRV_REQ_MAY_UNMAP);
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH qemu 3/3] rename config_to_vma into pvebackup_co_add_config
- mark it with coroutine_fn - add an additional parameter 'name' - return -1 on error (instead of 1) - code cleanup Signed-off-by: Dietmar Maurer --- blockdev.c | 40 ++-- 1 file changed, 26 insertions(+), 14 deletions(-) diff --git a/blockdev.c b/blockdev.c index 6d16043131..786921da0a 100644 --- a/blockdev.c +++ b/blockdev.c @@ -3416,10 +3416,16 @@ void qmp_backup_cancel(Error **errp) block_on_coroutine_fn(pvebackup_co_cancel, NULL); } -static int config_to_vma(const char *file, BackupFormat format, - const char *backup_dir, VmaWriter *vmaw, - Error **errp) +static int coroutine_fn pvebackup_co_add_config( +const char *file, +const char *name, +BackupFormat format, +const char *backup_dir, +VmaWriter *vmaw, +Error **errp) { +int res = 0; + char *cdata = NULL; gsize clen = 0; GError *err = NULL; @@ -3429,28 +3435,30 @@ static int config_to_vma(const char *file, BackupFormat format, } char *basename = g_path_get_basename(file); +if (name == NULL) name = basename; if (format == BACKUP_FORMAT_VMA) { -if (vma_writer_add_config(vmaw, basename, cdata, clen) != 0) { +if (vma_writer_add_config(vmaw, name, cdata, clen) != 0) { error_setg(errp, "unable to add %s config data to vma archive", file); -g_free(cdata); -g_free(basename); -return 1; +goto err; } } else if (format == BACKUP_FORMAT_DIR) { char config_path[PATH_MAX]; -snprintf(config_path, PATH_MAX, "%s/%s", backup_dir, basename); +snprintf(config_path, PATH_MAX, "%s/%s", backup_dir, name); if (!g_file_set_contents(config_path, cdata, clen, )) { error_setg(errp, "unable to write config file '%s'", config_path); -g_free(cdata); -g_free(basename); -return 1; +goto err; } } + out: g_free(basename); g_free(cdata); -return 0; +return res; + + err: +res = -1; +goto out; } bool job_should_pause(Job *job); @@ -3526,6 +3534,9 @@ static void coroutine_fn pvebackup_co_start(void *opaque) BlockJob *job; int dump_cb_block_size = -1; +const char *config_name = "qemu-server.conf"; +const char *firewall_name = "qemu-server.fw"; + if 
(!backup_state.backup_mutex_initialized) { qemu_co_mutex_init(_state.backup_mutex); backup_state.backup_mutex_initialized = true; @@ -3670,16 +3681,17 @@ static void coroutine_fn pvebackup_co_start(void *opaque) goto err; } + /* add configuration file to archive */ if (task->has_config_file) { -if (config_to_vma(task->config_file, format, backup_dir, vmaw, task->errp) != 0) { +if (pvebackup_co_add_config(task->config_file, config_name, format, backup_dir, vmaw, task->errp) != 0) { goto err; } } /* add firewall file to archive */ if (task->has_firewall_file) { -if (config_to_vma(task->firewall_file, format, backup_dir, vmaw, task->errp) != 0) { +if (pvebackup_co_add_config(task->firewall_file, firewall_name, format, backup_dir, vmaw, task->errp) != 0) { goto err; } } -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH pve-qemu] add patch to fix #1071
Signed-off-by: Dietmar Maurer --- ...-vma-writer.c-use-correct-AioContext.patch | 52 +++ debian/patches/series | 1 + 2 files changed, 53 insertions(+) create mode 100644 debian/patches/pve/0032-PVE-bug-fix-1071-vma-writer.c-use-correct-AioContext.patch diff --git a/debian/patches/pve/0032-PVE-bug-fix-1071-vma-writer.c-use-correct-AioContext.patch b/debian/patches/pve/0032-PVE-bug-fix-1071-vma-writer.c-use-correct-AioContext.patch new file mode 100644 index 000..addca82 --- /dev/null +++ b/debian/patches/pve/0032-PVE-bug-fix-1071-vma-writer.c-use-correct-AioContext.patch @@ -0,0 +1,52 @@ +From Mon Sep 17 00:00:00 2001 +From: Dietmar Maurer +Date: Mon, 21 Oct 2019 11:51:57 +0200 +Subject: [PATCH] PVE bug fix #1071 - vma-writer.c: use correct AioContext + +Signed-off-by: Dietmar Maurer +--- + vma-writer.c | 16 + 1 file changed, 8 insertions(+), 8 deletions(-) + +diff --git a/vma-writer.c b/vma-writer.c +index fd9567634d..b163fa2d3a 100644 +--- a/vma-writer.c b/vma-writer.c +@@ -199,12 +199,14 @@ int vma_writer_register_stream(VmaWriter *vmaw, const char *devname, + return n; + } + +-static void vma_co_continue_write(void *opaque) ++static void coroutine_fn yield_until_fd_writable(int fd) + { +-VmaWriter *vmaw = opaque; +- +-DPRINTF("vma_co_continue_write\n"); +-qemu_coroutine_enter(vmaw->co_writer); ++assert(qemu_in_coroutine()); ++AioContext *ctx = qemu_get_current_aio_context(); ++aio_set_fd_handler(ctx, fd, false, NULL, (IOHandler *)qemu_coroutine_enter, ++ NULL, qemu_coroutine_self()); ++qemu_coroutine_yield(); ++aio_set_fd_handler(ctx, fd, false, NULL, NULL, NULL, NULL); + } + + static ssize_t coroutine_fn +@@ -224,14 +226,12 @@ vma_queue_write(VmaWriter *vmaw, const void *buf, size_t bytes) + vmaw->co_writer = qemu_coroutine_self(); + + while (done < bytes) { +-aio_set_fd_handler(qemu_get_aio_context(), vmaw->fd, false, NULL, vma_co_continue_write, NULL, vmaw); +-qemu_coroutine_yield(); +-aio_set_fd_handler(qemu_get_aio_context(), vmaw->fd, false, NULL, NULL, 
NULL, NULL); + if (vmaw->status < 0) { + DPRINTF("vma_queue_write detected canceled backup\n"); + done = -1; + break; + } ++yield_until_fd_writable(vmaw->fd); + ret = write(vmaw->fd, buf + done, bytes - done); + if (ret > 0) { + done += ret; +-- +2.20.1 diff --git a/debian/patches/series b/debian/patches/series index ce96303..7097343 100644 --- a/debian/patches/series +++ b/debian/patches/series @@ -33,3 +33,4 @@ pve/0028-docs-recommend-use-of-md-clear-feature-on-all-Intel-.patch pve/0029-PVE-savevm-async-kick-AIO-wait-on-block-state-write.patch pve/0030-PVE-move-snapshot-cleanup-into-bottom-half.patch pve/0031-PVE-monitor-disable-oob-capability.patch +pve/0032-PVE-bug-fix-1071-vma-writer.c-use-correct-AioContext.patch -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] scsi-hd vs scsi-generic with iSCSI
also see: https://pve.proxmox.com/pipermail/pve-devel/2012-August/003347.html > On 9 October 2019 17:39 Thomas Lamprecht wrote: > > > On 10/8/19 12:54 PM, Daniel Berteaud wrote: > > - Le 8 Oct 19, à 12:28, Thomas Lamprecht t.lampre...@proxmox.com a > > écrit : > > > >> > >> Thanks for the nice write up and clear reproducer! > >> > >> It seems that if we cannot use the same backend for all disks we need to > >> die when a disk move to a storage backend is request, and that move would > >> need to change the scsi "backend". > >> As I'd not like to die it would be better to see if there's still the need > >> for different backends. > > > > Dying wouldn't be very nice indeed (I need to be able to move disks between > > NFS and ZFS over iSCSI on a regular basis) > > > > If scsi-hd was always selected, there would be no issue. I've patched my > > QemuServer.pm to do that for now. > > Not sure if scsi-generic/scsi-block has any advantages, but I couldn't > > measure performance diff in my case. unmap is also passed correctly with > > scsi-hd. IMHO, unless there are strong values with them (which I'am unaware > > off, but I couldn't find any documentation about all those backends), we > > should always use scsi-hd, as it's working with all storage types and > > allows live disk move from any storage type to any other, including the > > issue I have specific to ZFS over iSCSI (guest I/O error during live move > > from ZFS over iSCSI to something else) > > I would actually really like to change this to scsi-hd, but we need to be sure > it's OK for all possible supported setups.. > > So I tried to investigate a bit how it came to the use of scsi-generic, I came > to a commit[0] from Alexandre (CCd) which adds support for access with > libiscsi. > > Maybe he knows why the -generic was used and not the -hd one? 
> [0]: https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=d454d040338a6216c8d3e5cc9623d6223476cb5a
>
> cheers,
> Thomas
Re: [pve-devel] cpu type ppc/s390
> I would like ask if it is possible to add other CPU types to the proxmox
> admin interface
> Such as PowerPC (ppc64le/be) and s390x

There are no plans to add ppc or s390.
Re: [pve-devel] scsi-hd vs scsi-generic with iSCSI
> After some digging, I found that when using scsi-hd (instead of the default
> scsi-generic), everything is working correctly.

What exactly is the problem with scsi-generic? Maybe we can fix it?
Re: [pve-devel] applied: [RFC PATCH common] cli: prettify tables even more
ok, sorry for the noise ...

> On 5 September 2019 07:08 Thomas Lamprecht wrote:
>
> On 04.09.19 19:36, Dietmar Maurer wrote:
> > What about my doubts? AFAIK nobody answered my mail? (or maybe i missed a
> > mail)
>
> Wolfgang did answer:
>
> > On Wed, Aug 21, 2019 at 09:35:56PM +0200, Dietmar Maurer wrote:
> >> Are your sure common terminals support those characters?
> >>
> >> They did not when I tested ...
> >
> > Curious, when using utf-8 encoding for those characters I wouldn't see
> > how these characters would be any more special than any other?
> > In any case, tested urxvt, terminator, xfce4-terminal, gnome-terminal,
> > lxterminal.
> > Also note that all those symbols already existed in the extended ascii set,
> > codes 181, 198 and 216, and yes, those do show up correctly in dosbox
>
> And thus I saw no issue left..
>
> >> On 4 September 2019 16:06 Thomas Lamprecht wrote:
> >>
> >> On 21.08.19 14:33, Wolfgang Bumiller wrote:
> >>> Separate the header with a double line.
> >>>
> >>> Signed-off-by: Wolfgang Bumiller
> >>> ---
> >>> src/PVE/CLIFormatter.pm | 18 +-
> >>> 1 file changed, 17 insertions(+), 1 deletion(-)
> >>
> >> applied, thanks!
Re: [pve-devel] applied: [RFC PATCH common] cli: prettify tables even more
What about my doubts? AFAIK nobody answered my mail? (or maybe i missed a mail)

> On 4 September 2019 16:06 Thomas Lamprecht wrote:
>
> On 21.08.19 14:33, Wolfgang Bumiller wrote:
> > Separate the header with a double line.
> >
> > Signed-off-by: Wolfgang Bumiller
> > ---
> > src/PVE/CLIFormatter.pm | 18 +-
> > 1 file changed, 17 insertions(+), 1 deletion(-)
>
> applied, thanks!
Re: [pve-devel] [RFC PATCH common] cli: prettify tables even more
Are your sure common terminals support those characters? They did not when I tested ... > On 21 August 2019 14:33 Wolfgang Bumiller wrote: > > > Separate the header with a double line. > > Signed-off-by: Wolfgang Bumiller > --- > src/PVE/CLIFormatter.pm | 18 +- > 1 file changed, 17 insertions(+), 1 deletion(-) > > diff --git a/src/PVE/CLIFormatter.pm b/src/PVE/CLIFormatter.pm > index 84dbed1..0e9cbe6 100644 > --- a/src/PVE/CLIFormatter.pm > +++ b/src/PVE/CLIFormatter.pm > @@ -186,6 +186,7 @@ sub print_text_table { > my $borderstring_m = ''; > my $borderstring_b = ''; > my $borderstring_t = ''; > +my $borderstring_h = ''; > my $formatstring = ''; > > my $column_count = scalar(@$props_to_print); > @@ -255,41 +256,49 @@ sub print_text_table { > if ($utf8) { > $formatstring .= "│ %$alignstr${cutoff}s │"; > $borderstring_t .= "┌─" . ('─' x $cutoff) . "─┐"; > + $borderstring_h .= "╞═" . ('═' x $cutoff) . '═╡'; > $borderstring_m .= "├─" . ('─' x $cutoff) . "─┤"; > $borderstring_b .= "└─" . ('─' x $cutoff) . "─┘"; > } else { > $formatstring .= "| %$alignstr${cutoff}s |"; > $borderstring_m .= "+-" . ('-' x $cutoff) . "-+"; > + $borderstring_h .= "+=" . ('=' x $cutoff) . '='; > } > } elsif ($i == 0) { > if ($utf8) { > $formatstring .= "│ %$alignstr${cutoff}s "; > $borderstring_t .= "┌─" . ('─' x $cutoff) . '─'; > + $borderstring_h .= "╞═" . ('═' x $cutoff) . '═'; > $borderstring_m .= "├─" . ('─' x $cutoff) . '─'; > $borderstring_b .= "└─" . ('─' x $cutoff) . '─'; > } else { > $formatstring .= "| %$alignstr${cutoff}s "; > $borderstring_m .= "+-" . ('-' x $cutoff) . '-'; > + $borderstring_h .= "+=" . ('=' x $cutoff) . '='; > } > } elsif ($i == ($column_count - 1)) { > if ($utf8) { > $formatstring .= "│ %$alignstr${cutoff}s │"; > $borderstring_t .= "┬─" . ('─' x $cutoff) . "─┐"; > + $borderstring_h .= "╪═" . ('═' x $cutoff) . '═╡'; > $borderstring_m .= "┼─" . ('─' x $cutoff) . "─┤"; > $borderstring_b .= "┴─" . ('─' x $cutoff) . 
"─┘"; > } else { > $formatstring .= "| %$alignstr${cutoff}s |"; > $borderstring_m .= "+-" . ('-' x $cutoff) . "-+"; > + $borderstring_h .= "+=" . ('=' x $cutoff) . "=+"; > } > } else { > if ($utf8) { > $formatstring .= "│ %$alignstr${cutoff}s "; > $borderstring_t .= "┬─" . ('─' x $cutoff) . '─'; > + $borderstring_h .= "╪═" . ('═' x $cutoff) . '═'; > $borderstring_m .= "┼─" . ('─' x $cutoff) . '─'; > $borderstring_b .= "┴─" . ('─' x $cutoff) . '─'; > } else { > $formatstring .= "| %$alignstr${cutoff}s "; > $borderstring_m .= "+-" . ('-' x $cutoff) . '-'; > + $borderstring_h .= "+=" . ('=' x $cutoff) . '='; > } > } > } else { > @@ -313,15 +322,22 @@ sub print_text_table { > > $writeln->($borderstring_t) if $border; > > +my $borderstring_sep; > if ($header) { > my $text = sprintf $formatstring, map { $colopts->{$_}->{title} } > @$props_to_print; > $writeln->($text); > + $borderstring_sep = $borderstring_h; > +} else { > + $borderstring_sep = $borderstring_m; > } > > for (my $i = 0; $i < scalar(@$tabledata); $i++) { > my $coldata = $tabledata->[$i]; > > - $writeln->($borderstring_m) if $border && ($i != 0 || $header); > + if ($border && ($i != 0 || $header)) { > + $writeln->($borderstring_sep); > + $borderstring_sep = $borderstring_m; > + } > > for (my $i = 0; $i < $coldata->{height}; $i++) { > > -- > 2.20.1 > > > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
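For reference, the effect of the patch is that the header row is underlined with a double line (`╞═╪═╡`) while the lines between data rows stay single (`├─┼─┤`). A standalone Python sketch of that output style (an illustration of the intended rendering, not a translation of the Perl code):

```python
def render_table(header, rows):
    # Column widths from header and all row cells.
    widths = [max(len(str(c)) for c in col) for col in zip(header, *rows)]

    def line(l, m, r, fill):
        return l + m.join(fill * (w + 2) for w in widths) + r

    def row(cells):
        return "│ " + " │ ".join(str(c).ljust(w) for c, w in zip(cells, widths)) + " │"

    out = [line("┌", "┬", "┐", "─"),
           row(header),
           line("╞", "╪", "╡", "═")]            # double line under the header
    for i, cells in enumerate(rows):
        if i:
            out.append(line("├", "┼", "┤", "─"))  # single line between rows
        out.append(row(cells))
    out.append(line("└", "┴", "┘", "─"))
    return "\n".join(out)

print(render_table(["name", "status"], [["vm100", "running"], ["vm101", "stopped"]]))
```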
[pve-devel] [PATCH pve-storage v3] storage plugin: new list_volumes plugin method
This cleanup improve code reuse, and allows plugins to override list_volumes. --- Changes in v3: - keep template_list compatible with previous behavior - rename $template_list into $get_subdir_files Changes in v2: - remove debug statements - move implementaion to Plugin.pm for max. compatibility with old code - cleanup regex (use i flag) - bump APIVER an d APIAGE - sort result inside volume_list (for all plugins) - only list supported/enabled content PVE/Storage.pm | 175 +-- PVE/Storage/DirPlugin.pm | 4 +- PVE/Storage/Plugin.pm| 77 + 3 files changed, 117 insertions(+), 139 deletions(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index 5925c69..67a9a29 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -36,11 +36,11 @@ use PVE::Storage::ZFSPlugin; use PVE::Storage::DRBDPlugin; # Storage API version. Icrement it on changes in storage API interface. -use constant APIVER => 2; +use constant APIVER => 3; # Age is the number of versions we're backward compatible with. # This is like having 'current=APIVER' and age='APIAGE' in libtool, # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html -use constant APIAGE => 1; +use constant APIAGE => 2; # load standard plugins PVE::Storage::DirPlugin->register(); @@ -769,116 +769,6 @@ sub vdisk_free { $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker); } -# lists all files in the snippets directory -sub snippets_list { -my ($cfg, $storeid) = @_; - -my $ids = $cfg->{ids}; - -storage_check_enabled($cfg, $storeid) if ($storeid); - -my $res = {}; - -foreach my $sid (keys %$ids) { - next if $storeid && $storeid ne $sid; - next if !storage_check_enabled($cfg, $sid, undef, 1); - - my $scfg = $ids->{$sid}; - next if !$scfg->{content}->{snippets}; - - activate_storage($cfg, $sid); - - if ($scfg->{path}) { - my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); - my $path = $plugin->get_subdir($scfg, 'snippets'); - - foreach my $fn (<$path/*>) { - next if -d $fn; - - push @{$res->{$sid}}, { - 
volid => "$sid:snippets/". basename($fn), - format => 'snippet', - size => -s $fn // 0, - }; - } - } - - if ($res->{$sid}) { - @{$res->{$sid}} = sort {$a->{volid} cmp $b->{volid} } @{$res->{$sid}}; - } -} - -return $res; -} - -#list iso or openvz template ($tt = ) -sub template_list { -my ($cfg, $storeid, $tt) = @_; - -die "unknown template type '$tt'\n" - if !($tt eq 'iso' || $tt eq 'vztmpl' || $tt eq 'backup'); - -my $ids = $cfg->{ids}; - -storage_check_enabled($cfg, $storeid) if ($storeid); - -my $res = {}; - -# query the storage - -foreach my $sid (keys %$ids) { - next if $storeid && $storeid ne $sid; - - my $scfg = $ids->{$sid}; - my $type = $scfg->{type}; - - next if !storage_check_enabled($cfg, $sid, undef, 1); - - next if $tt eq 'iso' && !$scfg->{content}->{iso}; - next if $tt eq 'vztmpl' && !$scfg->{content}->{vztmpl}; - next if $tt eq 'backup' && !$scfg->{content}->{backup}; - - activate_storage($cfg, $sid); - - if ($scfg->{path}) { - my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); - - my $path = $plugin->get_subdir($scfg, $tt); - - foreach my $fn (<$path/*>) { - - my $info; - - if ($tt eq 'iso') { - next if $fn !~ m!/([^/]+\.[Ii][Ss][Oo])$!; - - $info = { volid => "$sid:iso/$1", format => 'iso' }; - - } elsif ($tt eq 'vztmpl') { - next if $fn !~ m!/([^/]+\.tar\.([gx]z))$!; - - $info = { volid => "$sid:vztmpl/$1", format => "t$2" }; - - } elsif ($tt eq 'backup') { - next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!; - - $info = { volid => "$sid:backup/$1", format => $2 }; - } - - $info->{size} = -s $fn // 0; - - push @{$res->{$sid}}, $info; - } - - } - - @{$res->{$sid}} = sort {lc($a->{volid}) cmp lc ($b->{volid}) } @{$res->{$sid}} if $res->{$sid}; -} - -return $res; -} - - sub vdisk_list { my ($cfg, $storeid, $vmid, $vollist) = @_; @@ -923,6 +813,35 @@ sub vdisk_list { return $res; } +sub template_list { +my ($cfg, $storeid, $tt) = @_; + + die "unknown template type '$tt'\n" + if !($tt eq 'iso' || $tt eq 'vztmpl' || 
$tt eq 'backup' || $tt eq 'snippets'); + +my $ids = $cfg->{ids}; + +storage_check_enabled($cfg, $storeid) if ($storeid); + +my $res = {}; + +# query the storage +foreach my $sid (keys %$ids) { + next if $storeid && $storeid ne $sid; + + my $scfg =
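The series collapses `snippets_list()` and the per-type `if/elsif` branches of `template_list()` into one listing routine keyed on content type. A minimal Python model of that dispatch — the filename patterns mirror the regexes visible in the patch, while the function and storage names are illustrative only:

```python
import re

# One pattern per content type, replacing the per-type if/elsif chains.
CONTENT_PATTERNS = {
    'iso':      re.compile(r'([^/]+\.iso)$', re.IGNORECASE),
    'vztmpl':   re.compile(r'([^/]+\.tar\.([gx]z))$'),
    'backup':   re.compile(r'([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$'),
    'snippets': re.compile(r'([^/]+)$'),  # real code additionally skips directories
}

def list_volumes(storeid, content_type, filenames):
    pattern = CONTENT_PATTERNS[content_type]
    res = []
    for fn in filenames:
        m = pattern.search(fn)
        if not m:
            continue
        res.append({'volid': f'{storeid}:{content_type}/{m.group(1)}'})
    # sort once here, for every plugin, as the v2/v3 changelog notes
    return sorted(res, key=lambda v: v['volid'].lower())
```

With a single entry point like this, `template_list()` reduces to a type check plus a delegation call, which is exactly the shape the v2/v3 patches take.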
[pve-devel] [PATCH pve-storage v2] storage plugin: new list_volumes plugin method
This cleanup improve code reuse, and allows plugins to override list_volumes. Note: This changes the template_list return value into an array. --- Changes in v2: - remove debug statements - move implementaion to Plugin.pm for max. compatibility with old code - cleanup regex (use i flag) - bump APIVER an d APIAGE - sort result inside volume_list (for all plugins) - only list supported/enabled content We need to adopt the template_list call in - pveam, - PVE/QemuServer.pm 7316f - PVE/LXC.pm 1832f PVE/Storage.pm | 155 - PVE/Storage/DirPlugin.pm | 4 +- PVE/Storage/Plugin.pm | 77 test/run_test_zfspoolplugin.pl | 2 +- 4 files changed, 98 insertions(+), 140 deletions(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index 5925c69..c438374 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -36,11 +36,11 @@ use PVE::Storage::ZFSPlugin; use PVE::Storage::DRBDPlugin; # Storage API version. Icrement it on changes in storage API interface. -use constant APIVER => 2; +use constant APIVER => 3; # Age is the number of versions we're backward compatible with. 
# This is like having 'current=APIVER' and age='APIAGE' in libtool, # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html -use constant APIAGE => 1; +use constant APIAGE => 2; # load standard plugins PVE::Storage::DirPlugin->register(); @@ -769,116 +769,6 @@ sub vdisk_free { $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker); } -# lists all files in the snippets directory -sub snippets_list { -my ($cfg, $storeid) = @_; - -my $ids = $cfg->{ids}; - -storage_check_enabled($cfg, $storeid) if ($storeid); - -my $res = {}; - -foreach my $sid (keys %$ids) { - next if $storeid && $storeid ne $sid; - next if !storage_check_enabled($cfg, $sid, undef, 1); - - my $scfg = $ids->{$sid}; - next if !$scfg->{content}->{snippets}; - - activate_storage($cfg, $sid); - - if ($scfg->{path}) { - my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); - my $path = $plugin->get_subdir($scfg, 'snippets'); - - foreach my $fn (<$path/*>) { - next if -d $fn; - - push @{$res->{$sid}}, { - volid => "$sid:snippets/". 
basename($fn), - format => 'snippet', - size => -s $fn // 0, - }; - } - } - - if ($res->{$sid}) { - @{$res->{$sid}} = sort {$a->{volid} cmp $b->{volid} } @{$res->{$sid}}; - } -} - -return $res; -} - -#list iso or openvz template ($tt = ) -sub template_list { -my ($cfg, $storeid, $tt) = @_; - -die "unknown template type '$tt'\n" - if !($tt eq 'iso' || $tt eq 'vztmpl' || $tt eq 'backup'); - -my $ids = $cfg->{ids}; - -storage_check_enabled($cfg, $storeid) if ($storeid); - -my $res = {}; - -# query the storage - -foreach my $sid (keys %$ids) { - next if $storeid && $storeid ne $sid; - - my $scfg = $ids->{$sid}; - my $type = $scfg->{type}; - - next if !storage_check_enabled($cfg, $sid, undef, 1); - - next if $tt eq 'iso' && !$scfg->{content}->{iso}; - next if $tt eq 'vztmpl' && !$scfg->{content}->{vztmpl}; - next if $tt eq 'backup' && !$scfg->{content}->{backup}; - - activate_storage($cfg, $sid); - - if ($scfg->{path}) { - my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); - - my $path = $plugin->get_subdir($scfg, $tt); - - foreach my $fn (<$path/*>) { - - my $info; - - if ($tt eq 'iso') { - next if $fn !~ m!/([^/]+\.[Ii][Ss][Oo])$!; - - $info = { volid => "$sid:iso/$1", format => 'iso' }; - - } elsif ($tt eq 'vztmpl') { - next if $fn !~ m!/([^/]+\.tar\.([gx]z))$!; - - $info = { volid => "$sid:vztmpl/$1", format => "t$2" }; - - } elsif ($tt eq 'backup') { - next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!; - - $info = { volid => "$sid:backup/$1", format => $2 }; - } - - $info->{size} = -s $fn // 0; - - push @{$res->{$sid}}, $info; - } - - } - - @{$res->{$sid}} = sort {lc($a->{volid}) cmp lc ($b->{volid}) } @{$res->{$sid}} if $res->{$sid}; -} - -return $res; -} - - sub vdisk_list { my ($cfg, $storeid, $vmid, $vollist) = @_; @@ -923,6 +813,15 @@ sub vdisk_list { return $res; } +sub template_list { +my ($cfg, $storeid, $tt) = @_; + +die "unknown template type '$tt'\n" + if !($tt eq 'iso' || $tt eq 'vztmpl' || $tt eq 'backup' || $tt eq 
'snippets'); + +return volume_list($cfg, $storeid, undef, $tt); +} + sub volume_list { my ($cfg, $storeid, $vmid, $content) = @_; @@ -932,33 +831,15 @@
[pve-devel] [PATCH pve-storage] PVE/Storage/Plugin.pm: new list_volumes plugin method
This cleanup improve code reuse, and allows plugins to override list_volumes. Note: This changes the template_list return value into an array. --- PVE/Storage.pm | 147 +++-- PVE/Storage/DirPlugin.pm | 84 ++- PVE/Storage/Plugin.pm | 23 ++ test/run_test_zfspoolplugin.pl | 2 +- 4 files changed, 118 insertions(+), 138 deletions(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index 5925c69..18ce8b6 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -769,116 +769,6 @@ sub vdisk_free { $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker); } -# lists all files in the snippets directory -sub snippets_list { -my ($cfg, $storeid) = @_; - -my $ids = $cfg->{ids}; - -storage_check_enabled($cfg, $storeid) if ($storeid); - -my $res = {}; - -foreach my $sid (keys %$ids) { - next if $storeid && $storeid ne $sid; - next if !storage_check_enabled($cfg, $sid, undef, 1); - - my $scfg = $ids->{$sid}; - next if !$scfg->{content}->{snippets}; - - activate_storage($cfg, $sid); - - if ($scfg->{path}) { - my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); - my $path = $plugin->get_subdir($scfg, 'snippets'); - - foreach my $fn (<$path/*>) { - next if -d $fn; - - push @{$res->{$sid}}, { - volid => "$sid:snippets/". 
basename($fn), - format => 'snippet', - size => -s $fn // 0, - }; - } - } - - if ($res->{$sid}) { - @{$res->{$sid}} = sort {$a->{volid} cmp $b->{volid} } @{$res->{$sid}}; - } -} - -return $res; -} - -#list iso or openvz template ($tt = ) -sub template_list { -my ($cfg, $storeid, $tt) = @_; - -die "unknown template type '$tt'\n" - if !($tt eq 'iso' || $tt eq 'vztmpl' || $tt eq 'backup'); - -my $ids = $cfg->{ids}; - -storage_check_enabled($cfg, $storeid) if ($storeid); - -my $res = {}; - -# query the storage - -foreach my $sid (keys %$ids) { - next if $storeid && $storeid ne $sid; - - my $scfg = $ids->{$sid}; - my $type = $scfg->{type}; - - next if !storage_check_enabled($cfg, $sid, undef, 1); - - next if $tt eq 'iso' && !$scfg->{content}->{iso}; - next if $tt eq 'vztmpl' && !$scfg->{content}->{vztmpl}; - next if $tt eq 'backup' && !$scfg->{content}->{backup}; - - activate_storage($cfg, $sid); - - if ($scfg->{path}) { - my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); - - my $path = $plugin->get_subdir($scfg, $tt); - - foreach my $fn (<$path/*>) { - - my $info; - - if ($tt eq 'iso') { - next if $fn !~ m!/([^/]+\.[Ii][Ss][Oo])$!; - - $info = { volid => "$sid:iso/$1", format => 'iso' }; - - } elsif ($tt eq 'vztmpl') { - next if $fn !~ m!/([^/]+\.tar\.([gx]z))$!; - - $info = { volid => "$sid:vztmpl/$1", format => "t$2" }; - - } elsif ($tt eq 'backup') { - next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!; - - $info = { volid => "$sid:backup/$1", format => $2 }; - } - - $info->{size} = -s $fn // 0; - - push @{$res->{$sid}}, $info; - } - - } - - @{$res->{$sid}} = sort {lc($a->{volid}) cmp lc ($b->{volid}) } @{$res->{$sid}} if $res->{$sid}; -} - -return $res; -} - - sub vdisk_list { my ($cfg, $storeid, $vmid, $vollist) = @_; @@ -923,6 +813,15 @@ sub vdisk_list { return $res; } +sub template_list { +my ($cfg, $storeid, $tt) = @_; + +die "unknown template type '$tt'\n" + if !($tt eq 'iso' || $tt eq 'vztmpl' || $tt eq 'backup' || $tt eq 
'snippets'); + +return volume_list($cfg, $storeid, undef, $tt); +} + sub volume_list { my ($cfg, $storeid, $vmid, $content) = @_; @@ -932,33 +831,11 @@ sub volume_list { my $scfg = PVE::Storage::storage_config($cfg, $storeid); -my $res = []; -foreach my $ct (@$cts) { - my $data; - if ($ct eq 'images') { - $data = vdisk_list($cfg, $storeid, $vmid); - } elsif ($ct eq 'iso' && !defined($vmid)) { - $data = template_list($cfg, $storeid, 'iso'); - } elsif ($ct eq 'vztmpl'&& !defined($vmid)) { - $data = template_list ($cfg, $storeid, 'vztmpl'); - } elsif ($ct eq 'backup') { - $data = template_list ($cfg, $storeid, 'backup'); - foreach my $item (@{$data->{$storeid}}) { - if (defined($vmid)) { - @{$data->{$storeid}} = grep { $_->{volid} =~ m/\S+-$vmid-\S+/ } @{$data->{$storeid}}; - } - } - } elsif ($ct eq 'snippets') { - $data = snippets_list($cfg, $storeid); - } +my
Re: [pve-devel] pve-firewall : log for default accept action and action format consistency in logs
> On 1 July 2019 03:03 Alexandre DERUMIER wrote:
>
> >> I always tried to minimize log overhead. If you log ACCEPT, that will
> >> generate very large amounts of logs?
>
> yes sure, but we have the option to set nolog for in/out default rules.

Ah, good.

> I have some server where customer want all accept out, but I need to log all
> access.
> (currently, only way is to add an extra rules ACCEPT at the end)

yes, you are right - that's clumsy...
Re: [pve-devel] pve-firewall : log for default accept action and action format consistency in logs
I always tried to minimize log overhead. If you log ACCEPT, that will generate very large amounts of logs? > On 29 June 2019 19:15 Alexandre DERUMIER wrote: > > > Hi, > > > I have noticed that when default action is accept, no log are currently > generated. > > > They are no log for ACCEPT in ruleset_add_chain_policy(). can we add it ? > > > sub ruleset_add_chain_policy { > my ($ruleset, $chain, $ipversion, $vmid, $policy, $loglevel, > $accept_action) = @_; > > if ($policy eq 'ACCEPT') { > > my $rule = { action => 'ACCEPT' }; > rule_substitude_action($rule, { ACCEPT => $accept_action}); > ruleset_generate_rule($ruleset, $chain, $ipversion, $rule); > > } elsif ($policy eq 'DROP') { > > ruleset_addrule($ruleset, $chain, "", "-j PVEFW-Drop"); > > ruleset_addrule($ruleset, $chain, "", "-j DROP", $loglevel, "policy > $policy: ", $vmid); > } elsif ($policy eq 'REJECT') { > ruleset_addrule($ruleset, $chain, "", "-j PVEFW-Reject"); > > ruleset_addrule($ruleset, $chain, "", "-g PVEFW-reject", $loglevel, > "policy $policy: ", $vmid); > } else { > # should not happen > die "internal error: unknown policy '$policy'"; > } > } > > > > > Another thing is thats actions ACCEPT/REJECT/DROP for a rule log, are > replaced by > > if ($direction eq 'OUT') { > rule_substitude_action($rule, { ACCEPT => > "PVEFW-SET-ACCEPT-MARK", REJECT => "PVEFW-reject" }); > ruleset_generate_rule($ruleset, $chain, $ipversion, > $rule, $cluster_conf, $vmfw_conf, $vmid); > } else { > rule_substitude_action($rule, { ACCEPT => $in_accept , > REJECT => "PVEFW-reject" }); > ruleset_generate_rule($ruleset, $chain, $ipversion, > $rule, $cluster_conf, $vmfw_conf, $vmid); > } > > > This is need for iptables rules, but in log, it's really strange to in > "PVEFW-SET-ACCEPT-MARK" instead "accept" for accept out rules. > I think we should keep ACCEPT/REJECT/DROP in the log, like for default rules. > > What do you think about this ? 
> > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [RFC v2 0/3] fix #1291: add purge option for VM/CT destroy
> On 28 June 2019 15:05 Thomas Lamprecht wrote:
>
> On 6/26/19 6:02 PM, Christian Ebner wrote:
> > The purge flag allows to remove the vmid from the vzdump.cron backup jobs on
> > VM/CT destruction.
>
> A few things I'm still missing:
> * Web GUI integration (simple checkbox?)
> * remove from replication?
> * purge RRD stats?

Purging RRD stats is not really possible, because they are distributed to all cluster nodes...
Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status
> >> So do we want to generate some RRD databases with that data?
>
> I don't think we need a rrd here, it's a simple status (ok/error/pending/...)
> on the transportzone.

Ok

> I don't want to stream vnet status, because it could be really huge.
> (like 20 servers broadcasting 300vnets for example).

Yes, I would also want to avoid sending too much data over this interface.

> I the gui, I would like to display transportzone like a storage in the left
> tree.
> Then for detail, click on the transportzone (like the volumes display on the
> storage on right pane),
> then query vnets status on the specific node at this time only.
>
> But I can use implement colon lists, no problem.

Thanks.
Re: [pve-devel] [PATCH pve-manager 1/2] pvestatd : broadcast sdn transportzone status
I am not sure if json is a good idea here. We use colon separated lists for everything else, so I would prefer that. It is easier to parse inside C, which is important when you want to generate RRD databases from inside pmxcfs ... Also, consider that it is quite hard to change that format later, because all cluster nodes reads/write that data. So do we want to generate some RRD databases with that data? > On 25 June 2019 00:04 Alexandre Derumier wrote: > > > Signed-off-by: Alexandre Derumier > --- > PVE/Service/pvestatd.pm | 22 ++ > 1 file changed, 22 insertions(+) > > diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm > index e138b2e8..bad1b73d 100755 > --- a/PVE/Service/pvestatd.pm > +++ b/PVE/Service/pvestatd.pm > @@ -37,6 +37,12 @@ PVE::Status::Plugin->init(); > > use base qw(PVE::Daemon); > > +my $have_sdn; > +eval { > +require PVE::API2::Network::SDN; > +$have_sdn = 1; > +}; > + > my $opt_debug; > my $restart_request; > > @@ -457,6 +463,16 @@ sub update_ceph_version { > } > } > > +sub update_sdn_status { > + > +if($have_sdn) { > + my ($transport_status, $vnet_status) = PVE::Network::SDN::status(); > + > + my $status = $transport_status ? encode_json($transport_status) : undef; > + PVE::Cluster::broadcast_node_kv("sdn", $status); > +} > +} > + > sub update_status { > > # update worker list. This is not really required and > @@ -524,6 +540,12 @@ sub update_status { > $err = $@; > syslog('err', "error getting ceph services: $err") if $err; > > +eval { > + update_sdn_status(); > +}; > +$err = $@; > +syslog('err', "sdn status update error: $err") if $err; > + > } > > my $next_update = 0; > -- > 2.20.1 > > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
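The colon-separated format preferred here over JSON is trivial to emit and to split, which is what makes it easy to handle from the C side of pmxcfs. A hypothetical sketch of encoding and decoding such a status record (the field layout is invented for illustration; the actual fields would be defined by the SDN status code):

```python
def encode_status(fields):
    # e.g. ['zone1', 'ok', '0'] -> 'zone1:ok:0'
    # The delimiter must not appear in the data -- the usual cost of
    # delimiter-separated formats versus JSON.
    for f in fields:
        if ':' in f:
            raise ValueError("status fields must not contain ':'")
    return ':'.join(fields)

def decode_status(line):
    # The C side can do the same with strchr()/strtok(), no JSON parser needed.
    return line.split(':')

record = encode_status(['zone1', 'ok', '0'])
print(record)                 # → zone1:ok:0
print(decode_status(record))  # → ['zone1', 'ok', '0']
```

The trade-off stated in the mail holds in this sketch too: the format is easy to parse in C but hard to evolve, since every cluster node must agree on the field order.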
[pve-devel] applied: [PATCH pve-network 0/3] add ifquery && status code
applied

> On 03 June 2019 at 17:57 Alexandre Derumier wrote:
>
> prelimary code to handle status of vnets && transports
> for later use in pvestatd
>
> Alexandre Derumier (3):
> create api: test if $scfg vnet exist
> add ifquery compare status
> add test statuscheck.pl
>
> PVE/API2/Network/Network.pm | 2 +-
> PVE/Network/Network.pm | 28
> test/statuscheck.pl | 44
> 3 files changed, 73 insertions(+), 1 deletion(-)
> create mode 100644 test/statuscheck.pl
>
> --
> 2.11.0
Re: [pve-devel] ZFS 0.8 and __kernel_fpu_{begin,restore} symbols
> > I have seen zfs 0.8 has been merged in to the pve kernel master branch
> > including the update to Kernel 5.0.8 for buster.
> >
> > Does the Ubuntu/PVE kernel export the __kernel_fpu symbols?
> > I ask because I didn't found a patch for this and it would be painful if
> > we have zfs with crypto without good performance.
> >
> > The patch used by NixOS https://github.com/NixOS/nixpkgs/pull/61076
> > mentions a throughput drop of 1GB/s (200 instead of 1.2 GB/s) without
> > the exported symbols.
>
> No. I guess this would be a kernel license violation ...

Oh, maybe not - we will take a closer look at that.
Re: [pve-devel] ZFS 0.8 and __kernel_fpu_{begin,restore} symbols
> I have seen zfs 0.8 has been merged in to the pve kernel master branch
> including the update to Kernel 5.0.8 for buster.
>
> Does the Ubuntu/PVE kernel export the __kernel_fpu symbols?
> I ask because I didn't found a patch for this and it would be painful if
> we have zfs with crypto without good performance.
>
> The patch used by NixOS https://github.com/NixOS/nixpkgs/pull/61076
> mentions a throughput drop of 1GB/s (200 instead of 1.2 GB/s) without
> the exported symbols.

No. I guess this would be a kernel license violation ...
Re: [pve-devel] opensource vm scheduler : btrplace
> Now, the main problem, is that it's java. (seem that scientific like it,
> redhat rhev/ovirt have also implement scheduling algo model with java).
> I don't known if it could be implemented in proxmox? (or at least with a
> daemon like the daemon, and rest api call from perl to java? Importing java
> class in perl ???)

We will not include any java code.
Re: [pve-devel] qemu : add disk option for physical/logical block (could improve windows guest performance wit ceph)
> could we add this as disk option ? (not defined by default).

Sounds like a good idea. Maybe we can even set a better default (512e)?
Re: [pve-devel] [RFC manager 1/2] node: add journal api
comments inline > On 13 May 2019 at 14:49 Dominik Csapak wrote: > > > this uses the new journalreader instead of journalctl, which is a bit > faster and can read from/to cursor and returns a start/end cursor > > also you can give an unix epoch as time parameters > > Signed-off-by: Dominik Csapak > --- > PVE/API2/Nodes.pm | 52 > 1 file changed, 52 insertions(+) > > diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm > index df47be1f..7f13f809 100644 > --- a/PVE/API2/Nodes.pm > +++ b/PVE/API2/Nodes.pm > @@ -699,6 +699,58 @@ __PACKAGE__->register_method({ > return $lines; > }}); > > +__PACKAGE__->register_method({ > +name => 'journal', > +path => 'journal', > +method => 'GET', > +description => "Read Journal", > +proxyto => 'node', > +permissions => { > + check => ['perm', '/nodes/{node}', [ 'Sys.Syslog' ]], > +}, > +protected => 1, > +parameters => { > + additionalProperties => 0, > + properties => { > + node => get_standard_option('pve-node'), > + since => { > + type=> 'number', > + description => "Display all log since this UNIX epoch.", > + optional => 1, > + }, > + until => { > + type=> 'number', > + description => "Display all log until this UNIX epoch.", > + optional => 1, > + }, > + lastentries => { Please can we get a description for all parameters? 
> + type => 'integer', > + optional => 1, > + }, > + startcursor => { > + type => 'string', > + optional => 1, > + }, > + endcursor => { > + type => 'string', > + optional => 1, > + }, > + }, > +}, > +returns => { > + type => 'array', > +}, > +code => sub { > + my ($param) = @_; > + > + my $rpcenv = PVE::RPCEnvironment::get(); > + my $user = $rpcenv->get_user(); > + > + return PVE::Tools::read_journal($param->{since}, $param->{until}, > + $param->{last}, $param->{startcursor}, $param->{endcursor}); > + > +}}); > + > my $sslcert; > > my $shell_cmd_map = { > -- > 2.11.0 > > > ___ > pve-devel mailing list > pve-devel@pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
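The `since`/`until`/`lastentries` semantics of the proposed endpoint can be modelled on a plain list of `(epoch, message)` entries. The `read_entries` helper below is a hypothetical stand-in for `PVE::Tools::read_journal`, using the epoch itself as the cursor; the real journalreader uses opaque systemd cursors:

```python
def read_entries(entries, since=None, until=None, lastentries=None):
    # entries: list of (unix_epoch, message), ordered oldest first.
    sel = [e for e in entries
           if (since is None or e[0] >= since)
           and (until is None or e[0] <= until)]
    if lastentries is not None:
        sel = sel[-lastentries:]          # keep only the newest N matches
    # Start/end cursors identify the first/last returned entry,
    # so a client can page through the journal in follow-up requests.
    start_cursor = sel[0][0] if sel else None
    end_cursor = sel[-1][0] if sel else None
    return sel, start_cursor, end_cursor

log = [(100, 'boot'), (200, 'start vm'), (300, 'stop vm')]
print(read_entries(log, since=150))                 # → ([(200, 'start vm'), (300, 'stop vm')], 200, 300)
print(read_entries(log, until=250, lastentries=1))  # → ([(200, 'start vm')], 200, 200)
```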
Re: [pve-devel] [PATCH 0/2] fix #2190: allow multiple words separated by whitespaces in SMBIOS manufacturer string
It is probably a bad idea to allow newlines!

> On 09 May 2019 at 10:28 Christian Ebner wrote:
>
> Christian Ebner (1):
> fix #2190: allow multiple words separated by whitespaces in SMBIOS
> manufacturer string
>
> PVE/QemuServer.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> Christian Ebner (1):
> fix: #2190 allow multiple words separated by whitespaces in SMBIOS
> manufacturer string
>
> www/manager6/qemu/Smbios1Edit.js | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
[pve-devel] applied: [PATCH pve-network 0/3] plugins update && networks.cfg.new
applied