Hi,
if we try to run a backup in pause mode with the guest agent enabled,
it seems that the fsfreeze QMP command is sent after the suspend, so the guest
agent is not responding.
INFO: suspend vm
INFO: snapshots found (not included into backup)
INFO: creating archive
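For reference, the ordering that would avoid this hang (a sketch: the freeze/thaw commands are real qemu-guest-agent QMP commands, the surrounding steps are only illustrative):

```
# Issue the guest-agent freeze while the guest can still respond,
# *then* suspend; reverse the order on the way out.
{"execute": "guest-fsfreeze-freeze"}   # via the QGA socket, guest running
# suspend the VM (QMP "stop"), take the backup, resume (QMP "cont")
{"execute": "guest-fsfreeze-thaw"}     # after resume, agent responsive again
```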
> My idea was to add some kind of job queue for the backup storage
You could use the pmxcfs locking feature (instead of local lock)...
That way only one task can be started cluster wide.
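As a single-node sketch of that idea (assumption: flock(1) here only stands in for what a pmxcfs lock would do cluster-wide):

```shell
#!/bin/sh
# Take an exclusive lock before starting a backup, so a second
# invocation fails (or could queue) instead of running in parallel.
lockfile=$(mktemp)
exec 9>"$lockfile"
if flock -n 9; then
  msg="backup job started"
else
  msg="another backup is already running"
fi
echo "$msg"
rm -f "$lockfile"
```

With cfs_lock_domain the lock would live under /etc/pve, so the same guard would hold across all cluster nodes.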
___
pve-devel mailing list
pve-devel@pve.proxmox.com
On 11/26/2015 09:04 AM, Dietmar Maurer wrote:
My idea was to add some kind of job queue for the backup storage
You could use the pmxcfs locking feature (instead of local lock)...
That way only one task can be started cluster wide.
The cfs_lock_domain method could be helpful here.
Great! I'll try to look at this.
- Original Message -
From: "Thomas Lamprecht" <t.lampre...@proxmox.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Thursday, 26 November 2015 09:42:12
Subject: Re: [pve-devel] backup : adding a storage option like "max concurrent
backup" ?
> The problem is that if you defined a backup job for all nodes at a specific
> time,
>
> It's launching the backup job on all nodes at the same time,
We usually simply define one job per node.
> and backup storage or network can be overloaded.
Or you limit vzdump bwlimit on the nodes.
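For example, vzdump does accept a bwlimit (KB/s); the value below is only an example:

```
# /etc/vzdump.conf (per node) -- cap backup read bandwidth so parallel
# jobs from several nodes don't saturate the backup storage
bwlimit: 40960
```

It can also be set for a single run with `vzdump --bwlimit 40960`.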
> On September 26, 2015 at 12:06 PM Michael Rasmussen wrote:
>
>
> On Sat, 26 Sep 2015 11:49:57 +0200 (CEST)
> Dietmar Maurer wrote:
>
> >
> > Is there a white space at the end of this line? If so, remove it.
> >
> No white space.
But your post includes a white space at the end - why?
> 15 5 * * 6 root vzdump 109 114 117 144 128 --mailnotification
> failure --quiet 1 --mode snapshot --mailto x@y.z --node esx1 --compress
> lzo --storage qnap_nfs
> 15 5 * * 5 root vzdump 115 125 143 148 152 102 154 156 110
> --mailnotification failure --quiet 1 --mailto x@y.z
On Sat, 26 Sep 2015 11:46:11 +0200 (CEST)
Dietmar Maurer wrote:
>
> You call your proxmox nodes esx1 and esx2 - interesting ;-)
>
Bad habit from work ;-)
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
On Sat, 26 Sep 2015 10:13:54 +0200 (CEST)
Dietmar Maurer wrote:
>
> Please can you post the crontab file?
>
# cat /etc/pve/vzdump.cron
# cluster wide vzdump cron schedule
# Automatically generated file - do not edit
PATH="/usr/sbin:/usr/bin:/sbin:/bin"
15 5 * * 6 root vzdump 109 114 117 144 128 --mailnotification
failure --quiet 1 --mode snapshot --mailto x@y.z --node esx1 --compress
lzo --storage qnap_nfs
> > Please can you post the crontab file?
> >
> # cat /etc/pve/vzdump.cron
> # cluster wide vzdump cron schedule
> # Automatically generated file - do not edit
>
> PATH="/usr/sbin:/usr/bin:/sbin:/bin"
>
> 15 5 * * 6 root vzdump 109 114 117 144 128 --mailnotification
> failure --quiet 1 --mode snapshot --mailto x@y.z --node esx1 --compress
> lzo --storage qnap_nfs
On Sat, 26 Sep 2015 12:52:04 +0200 (CEST)
Dietmar Maurer wrote:
>
> But your post includes a white space at the end - why?
>
MUA or list server line breaks (the line is longer than 72 characters)?
> Every time the backup scheduler runs I see this in the log for every VM
> that is backed up:
> Use of uninitialized value $cmd[8] in exec
> at /usr/share/perl/5.14/IPC/Open3.pm line 186.
Please can you post the crontab file?
Hi all,
Every time the backup scheduler runs I see this in the log for every VM
that is backed up:
Use of uninitialized value $cmd[8] in exec
at /usr/share/perl/5.14/IPC/Open3.pm line 186.
proxmox-ve-2.6.32: 3.4-163 (running kernel: 3.10.0-11-pve)
pve-manager: 3.4-11 (running version:
On 07/11/2015 09:25 AM, Cesar Peschiera wrote:
Yes, that makes sense to me.
Or maybe the PVE GUI could also have a third option to use pigz, and the
user could also select the number of cores to use; in that case it may
also be better to add a caution message saying that the use of many
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Eric Blevins ericlb...@gmail.com
Cc: pve-u...@pve.proxmox.com; pve-devel@pve.proxmox.com
Sent: Saturday, July 11, 2015 12:27 AM
Subject: Re: [pve-devel] backup : add pigz compressor
Problem with pigz is that it bails out on missing symbolic links, e.g.
follow-symlinks is on and cannot be disabled.
On July 10, 2015 6:51:21 AM CEST, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
a user from the pve-user list reports that using pigz (multithreaded gzip)
improves compression backup speed a lot.
But pigz needs all your CPU power to be that fast! If you have running VMs,
it will be as slow as normal gzip, or it will massively slow down the VMs.
I am really afraid this will trigger many support calls ...
Make the default number of pigz CPUs 2, this should not have much of a
CPU impact
You could even make it so using pigz requires a setting in
vzdump.conf. So in GUI you still can only select gzip or lzop.
If set to gzip and vzdump.conf has pigz:on then pigz is used instead of gzip.
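A sketch of how that switch could behave (assumption: the pigz setting and the thread count are the proposal under discussion, not existing options at the time). Since pigz output is gzip-compatible, restore tooling stays unchanged:

```shell
#!/bin/sh
# Pick pigz with a capped thread count when available, otherwise
# fall back to plain gzip.
pigz_threads=2
if command -v pigz >/dev/null 2>&1; then
  compressor="pigz -p $pigz_threads"
else
  compressor="gzip"
fi
# Round-trip a sample payload through the chosen compressor.
roundtrip=$(printf 'backup payload' | $compressor | zcat)
echo "$roundtrip"
```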
Most novice users are only going to use the GUI; this would reduce the
likelihood of them
- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
Cc: proxmoxve pve-u...@pve.proxmox.com, Chidester, Bryce
bryce.chides...@calyptix.com
Sent: Friday, 10 July 2015 07:05:07
Subject: Re: [pve-devel] backup : add pigz compressor
A user from the pve-user list reports that using pigz (multithreaded gzip)
improves compression backup speed a lot.
Maybe we could add it? (or maybe replace gzip?)
We already have lzop, which is as fast as pigz and uses less resources.
So what is the advantage of pigz? IMHO a parallel
Hi,
a user from the pve-user list reports that using pigz (multithreaded gzip)
improves compression backup speed a lot.
Maybe we could add it? (or maybe replace gzip?)
- Original Message -
From: Chidester, Bryce bryce.chides...@calyptix.com
To: proxmoxve pve-u...@pve.proxmox.com
Sent:
I can confirm that bug.
TBH, I've been seeing this for months; I presumed it was well known.
It happens when a backup fails as well - the last power outage we had was in
the middle of backups, and two VMs refused to start on node start because
they were locked.
Well, this is a different problem.
On Wed, 24 Dec 2014 09:29:18 AM Dietmar Maurer wrote:
Happens when a backup fails as well - last power outage we had was in the
middle of backups and two VM's refused to start on node start because
they
were locked
Well, this is a different problem.
is it? vm is locked because
On Wed, 24 Dec 2014 11:57:52 AM Dietmar Maurer wrote:
On power failure, it is impossible to remove the lock, because we are
already dead.
I guess we should remove that lock when we start a node.
But this bug is about removing the lock after receiving a signal (TERM).
Ah yes, I see your point.
On 23 Dec 2014, at 15:18, Gökalp Çakıcı gokalp.cak...@pusula.net.tr wrote:
You can cancel the job via the web interface while a backup job is running. I
tried the scenario and reproduced the situation. When you use the stop button
and try to start the VM, it says it is locked. First you unlock it from the
command line with qm unlock vmid, and then you can start the VM.
Hi all,
INFO: starting new backup job: vzdump 121 --remove 0 --mode snapshot --compress
lzo --storage local --node t3
INFO: Starting Backup of VM 121 (qemu)
INFO: status = running
INFO: VM is locked (backup)
ERROR: Backup of VM 121 failed - command 'qm set 121 --lock backup' failed:
exit code 25
sorry, forgot to send to pve-devel :(
On 23 Dec 2014, at 12:59, Dietmar Maurer diet...@proxmox.com wrote:
This indicates that something is wrong, maybe a crashed backup job?
no, the backup job was cancelled manually, and then a new backup job was
started.
You should check if
On 23 Dec 2014, at 13:20, lyt_yudi lyt_y...@icloud.com wrote:
This indicates that something is wrong, maybe a crashed backup job?
INFO: starting new backup job: vzdump 121 --remove 0 --mode snapshot --compress
lzo --storage local --node t3
INFO: Starting Backup of VM 121 (qemu)
INFO: status =
Can you please post your version of PVE:
pveversion -v
because I can't reproduce it on my machine, not in the GUI and not at the CLI.
How do you cancel the job manually?
Regards
Wolfgang
On December 23, 2014 at 6:36 AM lyt_yudi lyt_y...@icloud.com wrote:
You can cancel the job via the web interface while a backup job is running. I
tried the scenario and reproduced the situation. When you use the stop button
and try to start the VM, it says it is locked. First you unlock it from
the command line with qm unlock vmid, and then you can start the VM.
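In short, the recovery for a stale lock looks like this (VMID 121 taken from the log earlier in the thread):

```
qm unlock 121
qm start 121
```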
:-(
-Original Message-
From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of Dmitry
Petuhov
Sent: Friday, 17 October 2014 08:04
To: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] backup ceph high iops and slow
On 16.10.2014 22:33, VELARTIS Philipp
Dear all,
when backing up a container to NFS in suspend mode, I get many errors:
rsync: chown operation not permitted
and the backup ends with errors.
My question is: is this an NFS problem or a problem with Proxmox?
Some time ago I created the NFS storage via the GUI, and Proxmox has
created a mount link
Hi,
I have had the following situation several times in the past:
1.) Backup job started - hangs forever on a VM looping in boot mode, as
there was no OS installed.
2.) The next day the backup job can't start, as the one from before is
still running.
I think it would be very important to get an
Will do so. Can you point me to the part where vzdump checks whether there is
a process already running? I was not able to find that one.
In PVE/API2/VZDump.pm, search for getlock().
getlock is defined in PVE/VZDump.pm
On 20 October 2014 15:27, Dmitry Petuhov mityapetu...@gmail.com wrote:
Yes, a slow NFS server/network is known to cause failing backups with a VM
crash. As a workaround, you can try enabling gzip compression. It will slow
down NFS writes enough to let the NAS server do its job. If that's not enough
then
On 20 October 2014 14:15, Cesar Peschiera br...@click.com.py wrote:
Hi Lindsay
Maybe you'd better have PVE on a PC as a backup server instead of a NAS.
Why do I believe that it will be better?
For four reasons:
1) You will be able to manually restore the backups that were completed
successfully on
On Sat, 18 Oct 2014 10:56:18 AM Dietmar Maurer wrote:
Maybe a setting in the backup gui when checked prevents running
parallel backups could solve this issue?
It is quite easy to setup a separate job for each node.
Not if you need to exclude some vm's, but migrate them regularly
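A per-node setup would look like this in /etc/pve/vzdump.cron (node names and times are examples, staggered so the jobs don't hit the storage in parallel):

```
PATH="/usr/sbin:/usr/bin:/sbin:/bin"
15 1 * * 6 root vzdump --all --node node1 --storage backup_nfs --quiet 1
15 3 * * 6 root vzdump --all --node node2 --storage backup_nfs --quiet 1
```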
+RBD supports read-ahead/prefetching to optimize small, sequential reads.
+This should normally be handled by the guest OS in the case of a VM,
How should we do read-ahead inside qemu? manually?
On 20.10.2014 02:20, Lindsay Mathieson wrote:
True and in my case I wonder if the problem is my NAS. Backups to
locally attached USB drive are no problem, but I've always had
problems backing up to my NAS over GB ethernet, using NFS or SMB. I'd
often get 1 or 2 VM's failing with disk write
We read 64K blocks, so
I don't know for backup, but drive-mirror has a granularity option to change
the block size:
# @granularity: #optional granularity of the dirty bitmap, default is 64K.
# Must be a power of 2 between 512 and 64M.
Although it would be much easier it
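For reference, the QMP call with that option set (device and target names are made up):

```json
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio0",
                 "target": "/mnt/target.raw",
                 "sync": "full",
                 "granularity": 65536 } }
```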
Hi all,
I just discovered that when Proxmox initiates the configured automatic
backup, it runs backups from each node in the cluster in parallel. I
wonder whether this could be why users now and then see some of their
backups fail, given that the backup storage is slow?
Maybe a
On Sat, 18 Oct 2014 10:56:18 +
Dietmar Maurer diet...@proxmox.com wrote:
It is quite easy to setup a separate job for each node.
I know, but maybe users are not aware of this complication. If it can
remove a lot of noise on the forum from people with failing backups, I
think it is worth implementing.
It is quite easy to setup a separate job for each node.
I know, but maybe users are not aware of this complication.
If it can remove a
lot of noise on the forum from people with failing backups I think it is
worth
implementing.
1.) I don't think this is the reason for the problems.
Well...
I have just one node here, because the other server hasn't arrived yet, but
I already created the cluster...
And my backup tasks are extremely slow!
A file of about 23 GB takes almost 5 hours.
Ok! I ran the backup over LVM-on-top-iSCSI.
VMID NAME STATUS TIME SIZE FILENAME
Oh dear! Qnap! I pity you! That Qnap has only 128 MB of memory and weak
hardware...
Here, I do backups locally, to an external USB HDD.
On one server I saw around 15-18 MB/s, but on some occasions I saw less
than 10 MB/s.
Compared to the other server, where I saw 60-80 MB/s...
On 16.10.2014 22:33, VELARTIS Philipp Dürhammer wrote:
Why do backups with ceph make so high iops?
I get around 600 iops for 40mb/sec which is by the way very slow for a backup.
When I make a disk clone from local to ceph I get 120mb/sec (which is the
network limit from the old proxmox nodes) and
zero blocks on read.
For restore, no problem.
For drive-mirror writes to the target, a feature to skip zero-block writes
is also missing.
On Behalf Of Dmitry Petuhov
Sent: Friday, 17 October 2014 09:25
To: Alexandre DERUMIER
Cc: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] backup ceph high iops and slow
I think that skipping free space isn't the main issue: backup of used space
is even slower.
17.10.2014 10:31, Alexandre
diet...@proxmox.com
To: VELARTIS Philipp Dürhammer p.duerham...@velartis.at, Dmitry Petuhov
mityapetu...@gmail.com, Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 17 October 2014 13:02:29
Subject: RE: [pve-devel] backup ceph high iops and slow
I agree. The main
Why do backups with ceph make so high iops?
I get around 600 iops for 40mb/sec which is by the way very slow for a backup.
When I make a disk clone from local to ceph I get 120mb/sec (which is the
network limit from the old proxmox nodes) and only around 100-120 iops, which
is normal for a
Hi all,
recently I have begun to see logs like the one below. The automatic
backup fails for some of the VMs, but it seems completely random which
VM will fail when the backup is performed.
Another observation is this warning given when every backup starts: Use
of
So the WebGUI for backups basically edits a cron job?
yes
On Sun, 6 Jul 2014 03:25:16 AM Dietmar Maurer wrote:
Any possibility of adding per month schedules to backups? this would be
very useful for our DR backups, where we rotate offsite disks once per
month.
cron supports that, so it should not be that hard to implement it (but
someone needs to
Answering myself: no, it's not working for rbd:
-
INFO: starting new backup job: vzdump 123 --remove 0 --mode snapshot
--compress lzo --storage local --node node1283
INFO: Starting Backup of VM 123 (qemu)
INFO: status = running
ERROR: Backup of VM 123
I can look too. Should it generally work through vzdump and mode snapshot?
yes
I just posted the backup patches to the qemu-devel list.
: error: expected expression before ‘BackupCB’
make: *** [vma] Error 1
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: pve-devel@pve.proxmox.com
Sent: Tuesday, 13 November 2012 14:04:02
Subject: [pve-devel] backup RFC preview
This is a preview of the planned backup feature
Just sent an updated version - does that work better?
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent: Montag, 19. November 2012 12:24
To: Dietmar Maurer
Cc: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] backup RFC preview
Does it help when
$(tools-obj-y) $(block-obj-y)
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 19 November 2012 12:30:23
Subject: RE: [pve-devel] backup RFC preview
Just sent an updated version - does
Subject: Re: [pve-devel] backup RFC preview
Just sent an updated version - does that work better?
no :/
Latest qemu git:
I got "Hunk FAILED" on Makefile
root@kvmtest1:~/qemu2/qemu# patch -p1 < patch1.patch
patching file docs/backup-rfc.txt
root@kvmtest1:~/qemu2/qemu
osdep.o: In function `qemu_close':
/root/qemu2/qemu/osdep.c:212: undefined reference to
`monitor_fdset_dup_fd_find'
/root/qemu2/qemu/osdep.c:218: undefined reference to
`monitor_fdset_dup_fd_remove'
AFAIK my patches do not touch those files - maybe the current qemu build is
broken?
@pve.proxmox.com
Sent: Monday, 19 November 2012 13:37:54
Subject: RE: [pve-devel] backup RFC preview
My patches are against commit 6801038bc52d61f81ac8a25fbe392f1bad982887
Try to use that commit.
Maybe I should expose my internal qemu-git tree?
-Original Message-
From: Alexandre
: pve-devel@pve.proxmox.com
Sent: Monday, 19 November 2012 15:08:48
Subject: Re: [pve-devel] backup RFC preview
Mhm, with -monitor the qm monitor shell stuff doesn't work; I always get the
failure that the qmp human interface cannot be found.
I can get it to work in the PVE GUI by renaming qemu-system
# ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -
hda image.raw ...
you meant -qmp not monitor?
no
Mhm, with -monitor the qm monitor shell stuff doesn't work; I always get the
failure that the qmp human interface cannot be found.
Sure, this does not work with 'qm'. The
-devel@pve.proxmox.com
Sent: Wednesday, 14 November 2012 07:18:36
Subject: Re: [pve-devel] backup RFC preview
How can I test? Command line qm command?
# git clone git://git.qemu-project.org/qemu.git
then apply the patches.
# cd qemu
# ./configure --target-list=x86_64-softmmu
# make
You can
Are you going to use qemu instead of qemu-kvm for the 1.3 release? (I have
read that kvm will finally be officially merged into qemu for 1.3)
Only if they announce that qemu-kvm is obsolete.
I currently use qemu for the backup tests because it is required
to send patches against qemu.
I currently use qemu for the backup tests because it is required to
send patches against qemu.
Oh, you want to push patches upstream ? Great !
Yes, that is the plan. It would be great if we can get at least the
basic framework upstream (PATCH 2/4).
Else we should think twice if we can
This is a preview of the planned backup feature.
The following patches are against latest qemu.git
I plan to send the patch to the qemu-devel list next week.
Before I do that, I would like to get some feedback about:
* functionality
* implementation style
* possible improvements
Feel free to
Thanks Dietmar,
I'll try to give feedback tomorrow!
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: pve-devel@pve.proxmox.com
Sent: Tuesday, 13 November 2012 14:04:02
Subject: [pve-devel] backup RFC preview
This is a preview of the planned backup feature
So is this right now only for qcow / disk-based backups? Or a general
solution?
I do not know what you mean by a 'general' solution? But it does not depend
on qcow2; instead it works on any storage type and file format.
How can I test? Command line qm command?
Please use the qemu monitor