[pve-devel] backup suspend mode with guest agent enabled: fsfreeze timeout

2016-09-29 Thread Alexandre DERUMIER
Hi,

if we try to run a backup in suspend mode with the guest agent enabled,
it seems that the fsfreeze QMP command is sent after the suspend, so the
guest agent does not respond.


INFO: suspend vm
INFO: snapshots found (not included into backup)
INFO: creating archive 
'/var/lib/vz/dump/vzdump-qemu-104-2016_09_29-12_06_13.vma.lzo'
ERROR: VM 104 qmp command 'guest-fsfreeze-freeze' failed - got timeout
ERROR: VM 104 qmp command 'guest-fsfreeze-thaw' failed - got timeout


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] backup : adding a storage option like "max concurrent backup" ?

2015-11-26 Thread Dietmar Maurer
> My idea was to add some kind of job queue for the backup storage

You could use the pmxcfs locking feature (instead of local lock)...

That way only one task can be started cluster wide.
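The pmxcfs lock itself lives in the Perl code, but the mutual-exclusion idea can be sketched with flock(1) as a single-node analogy (the lock file path is made up for the example; pmxcfs would provide the cluster-wide equivalent):

```shell
# Single-node analogy of a cluster-wide backup lock using flock(1).
# The lock file path is hypothetical; on PVE the pmxcfs lock would play
# this role across all nodes.
LOCKFILE=/tmp/backup-storage.lock

run_backup() {
    # -n: give up immediately instead of queueing, mirroring "only one
    # task can be started" semantics
    flock -n "$LOCKFILE" -c "echo backup of $1 running" \
        || echo "backup of $1 skipped: another job holds the lock"
}

run_backup vm101
```

A job queue would replace the "give up" branch with a retry or wait.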



Re: [pve-devel] backup : adding a storage option like "max concurrent backup" ?

2015-11-26 Thread Thomas Lamprecht



On 11/26/2015 09:04 AM, Dietmar Maurer wrote:

My idea was to add some kind of job queue for the backup storage

You could use the pmxcfs locking feature (instead of local lock)...

That way only one task can be started cluster wide.


The cfs_lock_domain method could be helpful here:


add function to lock a domain

This can be used to execute code on an 'action domain' basis.
E.g.: if there are actions that cannot run simultaneously, even if
they, for example, don't access a common file and may also be spread
across different packages, we can now ensure the consistency of said
actions on an 'action domain' basis.






Re: [pve-devel] backup : adding a storage option like "max concurrent backup" ?

2015-11-26 Thread Alexandre DERUMIER
>> My idea was to add some kind of job queue for the backup storage 
> You could use the pmxcfs locking feature (instead of local lock)... 
> 
> That way only one task can be started cluster wide. 

>>The cfs_lock_domain method could be helpful here: 

Sounds great! I'll try to look at this.



Re: [pve-devel] backup : adding a storage option like "max concurrent backup" ?

2015-11-25 Thread Dietmar Maurer
> The problem is that if you defined a backup job for all nodes at a specific
> time,
> 
> It's launching the backup job on all nodes at the same time,

We usually simply define one job per node.

> and backup storage or network can be overloaded.

Or you limit vzdump bwlimit on the nodes.
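For reference, that per-node limit goes into /etc/vzdump.conf; the value is in KiB/s (the number below, about 50 MB/s, is only an example):

```
# /etc/vzdump.conf -- example value only
# limit backup I/O bandwidth per node, in KiB/s
bwlimit: 51200
```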



Re: [pve-devel] backup : adding a storage option like "max concurrent backup" ?

2015-11-25 Thread Alexandre DERUMIER
> The problem is that if you defined a backup job for all nodes at a specific 
> time, 
> 
> It's launching the backup job on all nodes at the same time, 

>>We usually simply define one job per node. 

It can be complex because, for example, you can't know when the first node
will have finished its backup before launching the backup on the second node.

Also, if you define a job for one node and specific VMs, it will not work if a
VM is migrated to another node.


> and backup storage or network can be overloaded. 

>>Or you limit vzdump bwlimit on the nodes. 

this will limit bandwidth on the nodes, but if you have for example 15 nodes
with backups running at the same time, you can still overload the storage.

Otherwise you need to do complex per-node scheduling to be sure that backups
are not running at the same time (which is almost impossible: with live
migration, backup durations can change when the number of VMs differs from one
day to the next).


My idea was to add some kind of job queue for the backup storage




Re: [pve-devel] Backup scheduler: Perl error

2015-09-26 Thread Dietmar Maurer


> On September 26, 2015 at 12:06 PM Michael Rasmussen  wrote:
> 
> 
> On Sat, 26 Sep 2015 11:49:57 +0200 (CEST)
> Dietmar Maurer  wrote:
> 
> > 
> > Is there a white space at the end of this line? If so, remove it.
> > 
> No white space. 

But your post includes a white space at the end - why?

> File is purely managed through the proxmox web gui.

I tried the same commands, but did not get that warning - strange.
Does someone else get those warnings?



Re: [pve-devel] Backup scheduler: Perl error

2015-09-26 Thread Dietmar Maurer
> 15 5 * * 6   root vzdump 109 114 117 144 128 --mailnotification
> failure --quiet 1 --mode snapshot --mailto x@y.z --node esx1 --compress
> lzo --storage qnap_nfs 
> 15 5 * * 5   root vzdump 115 125 143 148 152 102 154 156 110
> --mailnotification failure --quiet 1 --mailto x@y.z --mode snapshot
> --node esx2 --compress lzo --storage qnap_nfs

You call your proxmox nodes esx1 and esx2 - interesting ;-)



Re: [pve-devel] Backup scheduler: Perl error

2015-09-26 Thread Michael Rasmussen
On Sat, 26 Sep 2015 11:46:11 +0200 (CEST)
Dietmar Maurer  wrote:

> 
> You call your proxmox nodes esx1 and esx2 - interesting ;-)
> 
Bad habit from work ;-)

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
"... all the modern inconveniences ..."
-- Mark Twain




Re: [pve-devel] Backup scheduler: Perl error

2015-09-26 Thread Michael Rasmussen
On Sat, 26 Sep 2015 10:13:54 +0200 (CEST)
Dietmar Maurer  wrote:

> 
> Please can you post the crontab file?
> 
# cat /etc/pve/vzdump.cron 
# cluster wide vzdump cron schedule
# Automatically generated file - do not edit

PATH="/usr/sbin:/usr/bin:/sbin:/bin"

15 5 * * 6   root vzdump 109 114 117 144 128 --mailnotification
failure --quiet 1 --mode snapshot --mailto x@y.z --node esx1 --compress
lzo --storage qnap_nfs 
15 5 * * 5   root vzdump 115 125 143 148 152 102 154 156 110
--mailnotification failure --quiet 1 --mailto x@y.z --mode snapshot
--node esx2 --compress lzo --storage qnap_nfs




Re: [pve-devel] Backup scheduler: Perl error

2015-09-26 Thread Dietmar Maurer
> > Please can you post the crontab file?
> > 
> # cat /etc/pve/vzdump.cron 
> # cluster wide vzdump cron schedule
> # Automatically generated file - do not edit
> 
> PATH="/usr/sbin:/usr/bin:/sbin:/bin"
> 
> 15 5 * * 6   root vzdump 109 114 117 144 128 --mailnotification
> failure --quiet 1 --mode snapshot --mailto x@y.z --node esx1 --compress
> lzo --storage qnap_nfs 

Is there a white space at the end of this line? If so, remove it.
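A quick way to spot such a trailing space (the cron lines below are a shortened stand-in for the real /etc/pve/vzdump.cron, so the check is self-contained):

```shell
# Recreate the symptom: the first cron line ends with a space.
printf '15 5 * * 6 root vzdump 109 --storage qnap_nfs \n' >  /tmp/vzdump.cron
printf '15 5 * * 5 root vzdump 115 --storage qnap_nfs\n'  >> /tmp/vzdump.cron

# Print every line that ends in whitespace; no output means the file is clean.
grep -nE '[[:space:]]+$' /tmp/vzdump.cron
```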



Re: [pve-devel] Backup scheduler: Perl error

2015-09-26 Thread Michael Rasmussen
On Sat, 26 Sep 2015 12:52:04 +0200 (CEST)
Dietmar Maurer  wrote:

> 
> But your post includes a white space at the end - why?
> 
MUA or list server line breaks (the line is longer than 72 characters)?



Re: [pve-devel] Backup scheduler: Perl error

2015-09-26 Thread Dietmar Maurer
> Every time the backup scheduler runs I see this in the log for every VM
> that is backed up:
> Use of uninitialized value $cmd[8] in exec
> at /usr/share/perl/5.14/IPC/Open3.pm line 186.

Please can you post the crontab file?



[pve-devel] Backup scheduler: Perl error

2015-09-25 Thread Michael Rasmussen
Hi all,

Every time the backup scheduler runs I see this in the log for every VM
that is backed up:
Use of uninitialized value $cmd[8] in exec
at /usr/share/perl/5.14/IPC/Open3.pm line 186.

proxmox-ve-2.6.32: 3.4-163 (running kernel: 3.10.0-11-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-3.10.0-10-pve: 3.10.0-34
pve-kernel-2.6.32-41-pve: 2.6.32-163
pve-kernel-3.10.0-11-pve: 3.10.0-36
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-19
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1




Re: [pve-devel] backup : add pigz compressor

2015-07-13 Thread Emmanuel Kasper
On 07/11/2015 09:25 AM, Cesar Peschiera wrote:
 Yes, that makes sense to me.
 
 Or maybe the PVE GUI could also have a third option to use pigz, and the
 user could also select the number of cores to use; in that case it might be
 better to add a caution message saying that using many cores can slow down
 their VMs. At least for me, it would be fantastic.

I might cast a different opinion here:

I think when we add new buttons / fields in the UI, we should evaluate
how big is the use case.

I don't think we want the PVE gui to end up like this ;)
http://i.imgur.com/WmjtJ.png

It is maybe better to find out what the best default option is in
general and stick to it (for example, preselecting 'nocache' as the
default kvm cache, or having only the HTML5 console proposed by default
are good things IMHO)

There has been some Usability research on that topic
http://uxmyths.com/post/712569752/myth-more-choices-and-features-result-in-higher-satisfac

Emmanuel











Re: [pve-devel] backup : add pigz compressor

2015-07-11 Thread Dmitry Petuhov
Maybe run pigz with nice, always with the maximum number of cores? Then it
will always run at max speed while not bothering the main processes.
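This suggestion can be sketched in shell. pigz is assumed to be installed; the sketch falls back to plain gzip so it stays runnable, and the file path is just an example.

```shell
# Run the compressor at the lowest CPU priority with all cores: it uses
# idle cycles without starving the guests. Falls back to plain gzip if
# pigz is not installed, so the sketch is self-contained.
compress() {
    if command -v pigz >/dev/null 2>&1; then
        nice -n 19 pigz -p "$(nproc)" -c
    else
        nice -n 19 gzip -c
    fi
}

printf 'hello backup\n' | compress > /tmp/demo.gz
gzip -dc /tmp/demo.gz   # pigz output is ordinary gzip data
```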



Re: [pve-devel] backup : add pigz compressor

2015-07-11 Thread Cesar Peschiera

Yes, that makes sense to me.


Or maybe the PVE GUI could also have a third option to use pigz, and the user
could also select the number of cores to use; in that case it might be better
to add a caution message saying that using many cores can slow down their VMs.
At least for me, it would be fantastic.




Re: [pve-devel] backup : add pigz compressor

2015-07-10 Thread Michael Rasmussen
A problem with pigz is that it bails out on missing symbolic links, i.e.
following symlinks is on and cannot be disabled.

On July 10, 2015 6:51:21 AM CEST, Alexandre DERUMIER aderum...@odiso.com 
wrote:
Hi,

A user from the pve-user mailing list reports that using pigz (multithreaded
gzip) greatly improves backup compression speed.

Maybe we could add it? (Or maybe replace gzip?)



-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.



This mail was virus scanned and spam checked before delivery.
This mail is also DKIM signed. See header dkim-signature.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] backup : add pigz compressor

2015-07-10 Thread Eric Blevins
 But pigz needs all your CPU power to be that fast! If you have running VMs,
 it will be as slow as normal gzip, or it will massively slow down the VMs.
 I am really afraid this will trigger many support calls ...

Make the default number of pigz CPUs 2; this should not have much of a
CPU impact even on servers with a small number of cores.
Those of us who have dozens of cores can edit vzdump.conf and set an
option to use more cores for pigz.

You could even make it so that using pigz requires a setting in
vzdump.conf. So in the GUI you can still only select gzip or lzop.
If set to gzip and vzdump.conf has pigz:on, then pigz is used instead of gzip.
Most novice users are only going to use the GUI, so this would reduce the
likelihood of them turning on pigz and then complaining about their
decision.
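Sketched as a vzdump.conf fragment (hypothetical: the option names and syntax here were only a proposal in this thread, not a shipped feature at the time):

```
# /etc/vzdump.conf -- proposed, hypothetical syntax from this thread
# GUI still offers only gzip/lzop; pigz silently replaces gzip when enabled
pigz: on
# proposed default of 2 threads; big hosts could raise this
pigz-threads: 2
```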


Re: [pve-devel] backup : add pigz compressor

2015-07-10 Thread Dietmar Maurer
 You could even make it so that using pigz requires a setting in
 vzdump.conf. So in the GUI you can still only select gzip or lzop.
 If set to gzip and vzdump.conf has pigz:on, then pigz is used instead of gzip.
 Most novice users are only going to use the GUI, so this would reduce the
 likelihood of them turning on pigz and then complaining about their
 decision.

Yes, that makes sense to me. Someone want to write a patch?



Re: [pve-devel] backup : add pigz compressor

2015-07-09 Thread Alexandre DERUMIER
> We already have lzop, which is as fast as pigz and uses fewer resources.
> So what is the advantage of pigz?

I think better compression than lzop (but slower), while still faster than
classic gzip.

> IMHO a parallel lzop would be nice.

Yeah, I haven't found any implementation of that so far.




Re: [pve-devel] backup : add pigz compressor

2015-07-09 Thread Dietmar Maurer
 A user from the pve-user mailing list reports that using pigz (multithreaded
 gzip) greatly improves backup compression speed.
 
 Maybe we could add it? (Or maybe replace gzip?)

We already have lzop, which is as fast as pigz and uses fewer resources.
So what is the advantage of pigz? IMHO a parallel lzop would be nice.



[pve-devel] backup : add pigz compressor

2015-07-09 Thread Alexandre DERUMIER
Hi,

A user from the pve-user mailing list reports that using pigz (multithreaded
gzip) greatly improves backup compression speed.

Maybe we could add it? (Or maybe replace gzip?)


- Mail original -
From: Chidester, Bryce bryce.chides...@calyptix.com
To: proxmoxve pve-u...@pve.proxmox.com
Sent: Thursday, 9 July 2015 20:30:30
Subject: Re: [PVE-User] slow backup

While this doesn't directly address the issue you're experiencing, it 
may help slightly. 
In my Proxmox deployments, I've replaced /bin/gzip with pigz ( 
http://zlib.net/pigz/ apt-get install pigz ; ln -sfb /usr/bin/pigz 
/bin/gzip). pigz is a multi-threaded gzip, and it significantly 
decreased our backup times by speeding up the compression. We saw 
backup times drop to just 10-50% of what they took running gzip. These 
big, beefy virtualisation servers have all these CPUs+cores, so why 
limit gzip to just one? 

-- 

Bryce Chidester 
Director of Systems Engineering 
Calyptix Security | Simply Powerful Network Security. 

www.calyptix.com 


On Sun, 2015-07-05 at 21:19 +0200, Tonči Stipičević wrote: 
 Hello to all 
 
 I'm using a 4x Intel Atom CPU 2558 @ 2.4 GHz on the host; 
 the backup storages are a (1G) FreeNAS NFS share and a local SATA disk. 
 
 A classic backup (vzdump) of a 32G image takes about 1h ... the same 
 result on the NFS storage and on the local SATA disk. 
 I tried changing bwlimit in vzdump.conf; the result is always the 
 same: slow. 
 
 But Clone VM takes less than 10 min. 
 
 A manual lzop backup of this image (NFS storage or SATA disk) also takes 
 less than 10 min ... tar runs just as fast. 
 
 So we cannot say that the CPU is not capable of archiving with 
 compression. 
 
 Why is backup so slow and what can I do to optimize it? 
 
 thank you very much in advance and 
 BR 
 
 Tonci 


Re: [pve-devel] backup lock

2014-12-24 Thread Dietmar Maurer

  I can confirm that bug.
 
 
 TBH, I've been seeing this for months; I presumed it was well known.
 
 It happens when a backup fails as well - the last power outage we had was in
 the middle of backups, and two VMs refused to start on node start because
 they were locked.

Well, this is a different problem.



Re: [pve-devel] backup lock

2014-12-24 Thread Lindsay Mathieson
On Wed, 24 Dec 2014 09:29:18 AM Dietmar Maurer wrote:
  Happens when a backup fails as well - last power outage we had was in the 
  middle of backups and two VM's refused to start on node start because
  they 
  were locked
 
 Well, this is a different problem.


Is it? The VM is locked because the backup failed or was cancelled - looks similar.
-- 
Lindsay



Re: [pve-devel] backup lock

2014-12-24 Thread Dietmar Maurer
 On Wed, 24 Dec 2014 09:29:18 AM Dietmar Maurer wrote:
   Happens when a backup fails as well - last power outage we had was in the 
   middle of backups and two VM's refused to start on node start because
   they 
   were locked
  
  Well, this is a different problem.
 
 
 is it? vm is locked because backup failed or was cancelled - looks similar.

On power failure, it is impossible to remove the lock, because we are already
dead.
I guess we should remove that lock when we start a node.

But this bug is about removing the lock after receiving a signal (TERM).



Re: [pve-devel] backup lock

2014-12-24 Thread Lindsay Mathieson
On Wed, 24 Dec 2014 11:57:52 AM Dietmar Maurer wrote:
 On power failure, it is impossible to remove the lock, because we are
 already dead.
 I guess we should remove that lock when we start a node.
 
 But this bug is about removing the lock after receiving a signal (TERM).

Ah yes, I see your point.

We have a UPS and I thought it signalled a proper shutdown, but  I'm not sure 
- I'll try a test.
-- 
Lindsay



Re: [pve-devel] backup lock

2014-12-24 Thread Lindsay Mathieson
On Wed, 24 Dec 2014 01:53:40 PM Dietmar Maurer wrote:
  On Wed, 24 Dec 2014 11:57:52 AM Dietmar Maurer wrote:
   On power failure, it is impossible to remove the lock, because we are
   already dead.
   I guess we should remove that lock when we start a node.
  
  
  
   But this bug is about removing the lock after receiving a signal (TERM).
 
 
 
  Ah yes, I see your point.
 
 
 
  We have a UPS and I thought it signalled a proper shutdown, but  I'm not
  sure  - I'll try a test.

 Ah, and this is one more case. I guess we should terminate all backup
 tasks at shutdown (what about other tasks, like storage migration?)


I suspect all tasks should be cleanly killed if possible


 AFAIK this is currently not implemented.

Yup :) Just finished my test:


- Scheduled a backup on a node with a autostart VM,

- rebooted the node when it was backing up the VM

- VM shutdown failed with a lock error. That is quite a problem, as the VM is 
then killed.

- Autostart failed with a lock error.


- Manual start of the VM fails with a locked error.

- After unlock, all is ok.


A clean termination of the backup, and an unlock of the VM currently being
backed up, is needed.

cheers,

--
Lindsay Mathieson




Re: [pve-devel] backup lock

2014-12-23 Thread lyt_yudi

 On 2014-12-23 at 15:18, Gökalp Çakıcı gokalp.cak...@pusula.net.tr wrote:
 
 You can cancel the job via the web interface while a backup job is running.
 I tried that scenario and reproduced the situation: when you use the stop
 button and then try to start the VM, it says it is locked. First you unlock
 it from the command line with qm unlock vmid, and then you can start the VM.

Yes, like this:

# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-7 (running version: 3.3-7/bc628e80)
pve-kernel-3.10.0-5-pve: 3.10.0-20
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-15
qemu-server: 3.3-6
pve-firmware: 1.1-3
libpve-common-perl: 3.0-20
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-26
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1



Re: [pve-devel] backup lock

2014-12-23 Thread lyt_yudi
On 2014-12-23 at 23:35, Dietmar Maurer diet...@proxmox.com wrote:
 
 
 You can cancel the job via the web interface while a backup job is running.
 I tried that scenario and reproduced the situation: when you use the stop
 button and then try to start the VM, it says it is locked. First you unlock
 it from the command line with qm unlock vmid, and then you can start the VM.
 
 
 I can confirm that bug.
 

Thanks you.



Re: [pve-devel] backup lock

2014-12-23 Thread Lindsay Mathieson
On Tue, 23 Dec 2014 04:35:09 PM Dietmar Maurer wrote:
   You can cancel the job via the web interface while a backup job is running.
   I tried that scenario and reproduced the situation: when you use the stop
   button and then try to start the VM, it says it is locked. First you unlock
   it from the command line with qm unlock vmid, and then you can start the VM.
 
  
 
 I can confirm that bug.


TBH, I've been seeing this for months; I presumed it was well known.

It happens when a backup fails as well - the last power outage we had was in
the middle of backups, and two VMs refused to start on node start because they
were locked.
-- 
Lindsay



[pve-devel] backup lock

2014-12-22 Thread lyt_yudi
hi,all

INFO: starting new backup job: vzdump 121 --remove 0 --mode snapshot --compress 
lzo --storage local --node t3
INFO: Starting Backup of VM 121 (qemu)
INFO: status = running
INFO: VM is locked (backup)
ERROR: Backup of VM 121 failed - command 'qm set 121 --lock backup' failed: 
exit code 25
INFO: Backup job finished with errors
TASK ERROR: job errors

In this case you must run qm unlock 121 on the CLI to unlock VM 121.

Maybe this is a bug :(

Could this feature be added to the API? Or could the VM be unlocked before a
new backup job starts?


lyt_yudi
lyt_y...@icloud.com







Re: [pve-devel] backup lock

2014-12-22 Thread Dietmar Maurer

On 12/23/2014 02:58 AM, lyt_yudi wrote:

hi,all

INFO: starting new backup job: vzdump 121 --remove 0 --mode snapshot --compress 
lzo --storage local --node t3
INFO: Starting Backup of VM 121 (qemu)
INFO: status = running
INFO: VM is locked (backup)
ERROR: Backup of VM 121 failed - command 'qm set 121 --lock backup' failed: 
exit code 25
INFO: Backup job finished with errors
TASK ERROR: job errors

In this case I had to run qm unlock 121 on the CLI to unlock VM 121.

maybe this is a bug :(

Could this unlock be exposed in the API, or done automatically before a new backup job?


This indicates that something is wrong, maybe a crashed backup job?
You should check if there is an old backup task still running before 
using unlock.
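As a rough, self-contained illustration of that check: on a real node the config would be /etc/pve/qemu-server/<vmid>.conf and you would also verify that no vzdump process is still running; here a temporary sample file stands in so the snippet runs anywhere.

```shell
# Detect a stale 'lock: backup' line in a (sample) VM config before unlocking.
conf=$(mktemp)
cat > "$conf" <<'EOF'
bootdisk: virtio0
lock: backup
memory: 2048
EOF

locked=no
if grep -q '^lock: backup' "$conf"; then
  locked=yes
  echo "VM still locked by backup - check for running vzdump tasks, then: qm unlock 121"
fi
rm -f "$conf"
```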





Re: [pve-devel] backup lock

2014-12-22 Thread lyt_yudi
sorry, forgot to send to pve-devel :(

 On December 23, 2014, at 12:59 PM, Dietmar Maurer diet...@proxmox.com wrote:
 
 This indicates that something is wrong, maybe a crashed backup job?

No, I cancelled the backup job manually and then started a new backup job.

 You should check if there is an old backup task still running before using 
 unlock.


there is no old backup task.

thanks.



Re: [pve-devel] backup lock

2014-12-22 Thread lyt_yudi

 On December 23, 2014, at 1:20 PM, lyt_yudi lyt_y...@icloud.com wrote:
 
 This indicates that something is wrong, maybe a crashed backup job?

INFO: starting new backup job: vzdump 121 --remove 0 --mode snapshot --compress 
lzo --storage local --node t3
INFO: Starting Backup of VM 121 (qemu)
INFO: status = running
INFO: update VM 121: -lock backup
INFO: exclude disk 'virtio1' (backup=no)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: snapshots found (not included into backup)
INFO: creating archive 
'/var/lib/vz/dump/vzdump-qemu-121-2014_12_23-13_22_52.vma.lzo'
INFO: started backup task 'd7330854-dbb6-4461-aba5-f3b604bcfa34'
INFO: status: 0% (88211456/34359738368), sparse 0% (83898368), duration 3, 29/1 
MB/s
ERROR: interrupted by signal
INFO: aborting backup job

this ERROR is caused by manual cancellation.

Maybe this cleanup could be integrated into the unlock operation.

thanks.



Re: [pve-devel] backup lock

2014-12-22 Thread Wolfgang Link
Can you please post your PVE version:

pveversion -v

because I can't reproduce it on my machine, neither in the GUI nor on the CLI.

How do you cancel the job manually?

Regards

Wolfgang

 On December 23, 2014 at 6:36 AM lyt_yudi lyt_y...@icloud.com wrote:
 
 
 
  On December 23, 2014, at 1:20 PM, lyt_yudi lyt_y...@icloud.com wrote:
  
  This indicates that something is wrong, maybe a crashed backup job?
 
 INFO: starting new backup job: vzdump 121 --remove 0 --mode snapshot
 --compress lzo --storage local --node t3
 INFO: Starting Backup of VM 121 (qemu)
 INFO: status = running
 INFO: update VM 121: -lock backup
 INFO: exclude disk 'virtio1' (backup=no)
 INFO: backup mode: snapshot
 INFO: ionice priority: 7
 INFO: snapshots found (not included into backup)
 INFO: creating archive
 '/var/lib/vz/dump/vzdump-qemu-121-2014_12_23-13_22_52.vma.lzo'
 INFO: started backup task 'd7330854-dbb6-4461-aba5-f3b604bcfa34'
 INFO: status: 0% (88211456/34359738368), sparse 0% (83898368), duration 3,
 29/1 MB/s
 ERROR: interrupted by signal
 INFO: aborting backup job
 
 this ERROR is caused by manual cancellation.
 
 Maybe this cleanup could be integrated into the unlock operation.
 
 thanks.



Re: [pve-devel] backup lock

2014-12-22 Thread Gökalp Çakıcı
You can cancel the job via the web interface while a backup job is running. I 
tried the scenario and reproduced the situation. When you use the stop 
button and try to start the VM, it says locked. First you unlock it from 
the command line with qm unlock vmid, and then you can start the VM.


On 12/23/14 9:13 AM, Wolfgang Link wrote:

Can you please post your PVE version:

pveversion -v

because I can't reproduce it on my machine, neither in the GUI nor on the CLI.

How do you cancel the job manually?

Regards

Wolfgang


On December 23, 2014 at 6:36 AM lyt_yudi lyt_y...@icloud.com wrote:




在 2014年12月23日,下午1:20,lyt_yudi lyt_y...@icloud.com 写道:


This indicates that something is wrong, maybe a crashed backup job?

INFO: starting new backup job: vzdump 121 --remove 0 --mode snapshot
--compress lzo --storage local --node t3
INFO: Starting Backup of VM 121 (qemu)
INFO: status = running
INFO: update VM 121: -lock backup
INFO: exclude disk 'virtio1' (backup=no)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: snapshots found (not included into backup)
INFO: creating archive
'/var/lib/vz/dump/vzdump-qemu-121-2014_12_23-13_22_52.vma.lzo'
INFO: started backup task 'd7330854-dbb6-4461-aba5-f3b604bcfa34'
INFO: status: 0% (88211456/34359738368), sparse 0% (83898368), duration 3,
29/1 MB/s
ERROR: interrupted by signal
INFO: aborting backup job

this ERROR is caused by manual cancellation.

Maybe this cleanup could be integrated into the unlock operation.

thanks.



--
Gokalp Cakici
Pusula Iletisim Hizmetleri
T: +90-212-2134142
F: +90-212-2135958
http://www.pusula.net.tr
http://www.uZmanPosta.com



Re: [pve-devel] backup ceph high iops and slow

2014-12-09 Thread VELARTIS Philipp Dürhammer
Hi,

we talked about this some months ago..
does anybody know if it will be improved soon?

So mainly it's a ceph problem - or should we also talk to the qemu devs?
Making bigger backups now takes way too much time. Making a backup of a node 
which has about 5 TB will take about 36 hours :-(



-Original Message-
From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On behalf of Dmitry 
Petuhov
Sent: Friday, 17 October 2014 08:04
To: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] backup ceph high iops and slow

16.10.2014 22:33, VELARTIS Philipp Dürhammer wrote:
 Why do backups with ceph cause such high iops?
 I get around 600 iops for 40 MB/s, which is by the way very slow for a backup.
 When I make a disk clone from local to ceph I get 120 MB/s (which is the 
 network limit of the old proxmox nodes) and only around 100-120 iops, which 
 is normal for a sequential read at 120 MB/s...
 Is vzdump causing problems?
It's not vzdump's issue, but qemu's. See 
http://forum.proxmox.com/threads/19341-Ceph-backup

--
Best regards,
   Dmitry Petuhov
   Lead System Administrator
   IT Department
   ZAO Electro-com
   phone: +79209755610
   email: d.petu...@electro-com.ru



[pve-devel] Backup-Errors - Bug?

2014-11-14 Thread Detlef Bracker
Dear all,

when backing up a container to NFS in suspend mode, I get many errors:

rsync chown  operation not permitted

and the backup ends with errors.

My question is: is this an NFS problem or a Proxmox problem?

Some time ago I created the NFS storage via the GUI, and Proxmox created a
mount point under /mnt/pve/backup9 with these permissions:

drwxr-xr-x 3 libuuid users 4 Apr 11  2014 backup9

Before, I used this backup storage only for VMs, without problems. Now I am
also testing backups of containers, and they do not work - nor does the
snapshot mode for containers. Different errors appear:

INFO: creating lvm snapshot of /dev/mapper/pve-data
('/dev/pve/vzsnap-ns-0')
INFO: Volume group pve has insufficient free space (0 extents): 256
required

Regards

Detlef






[pve-devel] Backup hangs / no notify if second backup job can't start

2014-10-22 Thread Stefan Priebe - Profihost AG
Hi,

i had the following situation several times in the past.

1.) Backup Job started - hangs forever on a VM looping in boot mode as
there was no OS installed

2.) At the next day the backup job can't start as the one from before is
still running.

I think it would be very important to get an e-mail that the job can't
start as another job is still running. Otherwise you think that backup
works fine but it does not ...

Greets,
Stefan


Re: [pve-devel] Backup hangs / no notify if second backup job can't start

2014-10-22 Thread Dietmar Maurer
 i had the following situation several times in the past.
 
 1.) Backup Job started - hangs forever on a VM looping in boot mode as there
 was no OS installed
 
 2.) At the next day the backup job can't start as the one from before is still
 running.
 
 I think it would be very important to get an e-mail that the job can't start 
 as
 another job is still running. Otherwise you think that backup works fine but 
 it
 does not ...

Would be great if you can write a patch for that. Is there already a bug report 
for that?

- Dietmar



Re: [pve-devel] Backup hangs / no notify if second backup job can't start

2014-10-22 Thread Stefan Priebe - Profihost AG

On 22.10.2014 09:16, Dietmar Maurer wrote:
 i had the following situation several times in the past.

 1.) Backup Job started - hangs forever on a VM looping in boot mode as
 there was no OS installed

 2.) At the next day the backup job can't start as the one from before is 
 still
 running.

 I think it would be very important to get an e-mail that the job can't start 
 as
 another job is still running. Otherwise you think that backup works fine but 
 it
 does not ...
 
 Would be great if you can write a patch for that. Is there already a bug 
 report for that?

Will do so. Can you point me to the part where vzdump checks whether
there is a process already running? I was not able to find that one.

Stefan

 - Dietmar
 


Re: [pve-devel] Backup hangs / no notify if second backup job can't start

2014-10-22 Thread Dietmar Maurer
 Will do so. Can you point me to the part where vzdump checks whether there is
 a process already running? I was not able to find that one.

In PVE/API2/VZDump.pm, search for getlock().

getlock is defined in PVE/VZDump.pm
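The essential pattern behind getlock() can be sketched in a few lines of shell (the lock-file path below is purely illustrative, not the one vzdump actually uses): take an exclusive lock non-blockingly and report instead of silently queueing.

```shell
# Single-instance guard: flock(1) on a lock file, failing fast when busy.
lockfile=$(mktemp)          # illustrative path; vzdump keeps its own lock file
exec 9>"$lockfile"
if flock -n 9; then
  result="lock acquired, backup may start"
else
  result="another backup job is still running"
fi
echo "$result"
rm -f "$lockfile"
```

A notification hook would go in the else branch, which is exactly where today's code path only logs the failure.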



Re: [pve-devel] backup

2014-10-20 Thread Lindsay Mathieson
On 20 October 2014 15:27, Dmitry Petuhov mityapetu...@gmail.com wrote:

Yes, slow NFS servers/networks are known to cause failing backups with VM
 crashes. As a workaround, you can try enabling GZIP compression. It will slow
 down NFS writes enough to let the NAS server do its job. If that's not enough,
 then try setting bwlimit in /etc/vzdump.conf.




Yes, that worked! and not much slower either. This will keep us going while
I decide on alternatives.

Thanks Dmitry,


-- 
Lindsay


Re: [pve-devel] backup

2014-10-20 Thread Lindsay Mathieson
On 20 October 2014 14:15, Cesar Peschiera br...@click.com.py wrote:

 Hi Lindsay

 Maybe you'd be better off with PVE on a PC as a backup server instead of a NAS.
 Why do I believe that is better? For four reasons:
 1) You can manually restore the backups that completed successfully on that
 same PC running PVE, and test them to be sure your backup files are in
 perfect condition for restoration.
 2) You will not need to use your real servers for such tests, so you avoid
 performance degradation.
 3) If a hardware component breaks, you only need to replace that part.
 4) If a real PVE server breaks down and you don't have HA enabled, your
 backup PC can help you by starting the necessary VMs on that same machine.



That's an interesting thought Cesar, I'll look into that. It would be very
useful to have a test server.

OTOH, the attraction of a NAS is the built-in features - web GUI, sync to
external disk, notification of results via email etc. With a PVE server I'd
have to script and test it all myself.



 On the other hand, for a company, I would test this scenario:

 2 backup PCs
 -
 - The two backup PCs will have the same configuration
 - Mainboard Asus P8H77-M Pro (workstation)
 - 16 GB RAM
 - OS = PVE (from its ISO installer)
 - NFS service for use by the backups
 - 1 dual-port Intel 10 Gb/s NIC with active-backup bonding (used
 exclusively for the backups), connected to the backup LAN
 - 2 single-port Intel NICs with LACP bonding, connected to the company
 LAN (for management)


Why two bonded NICs for the management connection?


thanks,

-- 
Lindsay
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] backup

2014-10-20 Thread Cesar Peschiera
the attraction of a NAS is the built-in features - web gui, sync to external 
disk, notification of results via email etc.
PVE has many of these features (syncing to an external disk can run automatically 
with a script created by you and crontab), but I don't know why you want 
this feature.

 With a PVE server I'd have to script and test it all myself.
For one thing, with a NAS, how can you test the backups of VMs without any 
kind of human interaction? And remember that the backups are created in 
vzdump format, so I am not sure your NAS can read this kind of file.

On the other hand:
In my setup, the backups are scheduled daily via the PVE GUI on the real 
servers; obviously there is no human interaction for these tasks, and 
remember also that PVE sends the report mails automatically.
The idea of doing a restore and starting the VM on the backup PC manually is 
only to check manually and be sure that there was no sort of strange 
problem.

Making a comparison with the leading commercial backup product for 
hypervisors (for VMware and Hyper-V):
Veeam Backup does these tasks automatically, in chronological order:
A) Makes backups of the assigned VMs automatically
B) Restores the VMs automatically in an isolated environment
C) Checks automatically that the VMs can boot and start
D) Removes automatically all traces of these actions
E) Sends a complete report of its actions by email automatically

So, as the PVE backup system doesn't have a task to test the backups 
performed, I believe I can do it manually when I want and with the VMs 
that I want (only using the backup PC, obviously).

Personally, I don't believe that when the PVE email tells us a backup is 
OK, in practice it is otherwise - but as people say, one never knows 
until one tests it.

Why two bonded NICs for the management connection?
Again, for four reasons:
1) It is only a precaution: if a cable, a NIC, a physical port of the 
switch, or the switch itself breaks down, I will always have redundancy and 
will be able to keep the network connection alive. Remember that my 
switches are in stack mode (i.e. I have HA on all the switches), so each NIC 
of my backup PC and my real servers will be connected to different switches 
that are also in HA.

2) On the other hand, for the backup servers and their connection to the LAN 
of the company, Intel has cheap 1 Gb/s NICs on amazon.com (model I-210); the 
model is at this link:
http://ark.intel.com/es/products/68668/Intel-Ethernet-Server-Adapter-I210-T1

Or cheaper with this model (Intel Gigabit CT Desktop Adapter), which also 
supports many kinds of network bonding:
http://ark.intel.com/es/products/50395/Intel-Gigabit-CT-Desktop-Adapter

And as I don't need performance for this kind of connection, any network 
card that works well will be good; even a speed of 100 Mb/s will work well, 
since these PCs will not be in the PVE cluster.

3) The driver versions of these Intel NICs are updated periodically in the 
kernel by the PVE team (one less worry for me)

4) On the other hand, for production environments, I have the good habit of 
installing all network connections with some type of bonding mode, as 
applicable, choosing the most convenient setup (I always think: if I have HA 
on my servers, why not have redundancy in their network connections?).
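For readers who want to try this, an active-backup bond in Debian's /etc/network/interfaces looks roughly like the following (interface names and values are examples, not taken from Cesar's actual setup):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100
```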

Hoping this advice has been helpful - see you soon.

Best regards
Cesar Peschiera
  - Original Message - 
  From: Lindsay Mathieson 
  To: Cesar Peschiera 
  Cc: pve-devel@pve.proxmox.com 
  Sent: Monday, October 20, 2014 11:59 PM
  Subject: Re: [pve-devel] backup






Re: [pve-devel] backup

2014-10-19 Thread Dietmar Maurer
 On Sat, 18 Oct 2014 10:56:18 AM Dietmar Maurer wrote:
   Maybe a setting in the backup gui when checked prevents running
   parallel backups could solve this issue?
 
  It is quite easy to setup a separate job for each node.
 
 
 Not if you need to exclude some vm's, but migrate them regularly for load
 balancing.

If there is a bug somewhere, we first need a reliable test case for it, so that
we can reproduce it.

After that, we can try to find a solution for the problem.

Trying to implement a solution for an unknown problem is the wrong way.






Re: [pve-devel] backup ceph high iops and slow

2014-10-19 Thread Alexandre DERUMIER
about ceph new read-ahead feature :

https://github.com/ceph/ceph/commit/a9f302da08ab96128b28d85e2f566ad0f2ba2f30

+
+
+Read-ahead Settings
+===
+
+.. versionadded:: 0.86
+
+RBD supports read-ahead/prefetching to optimize small, sequential reads.
+This should normally be handled by the guest OS in the case of a VM,
+but boot loaders may not issue efficient reads.
+Read-ahead is automatically disabled if caching is disabled.
+
+
+``rbd readahead trigger requests``
+
+:Description: Number of sequential read requests necessary to trigger 
read-ahead.
+:Type: Integer
+:Required: No
+:Default: ``10``
+
+
+``rbd readahead max bytes``
+
+:Description: Maximum size of a read-ahead request.  If zero, read-ahead is 
disabled.
+:Type: 64-bit Integer
+:Required: No
+:Default: ``512 KiB``
+
+
+``rbd readahead disable after bytes``
+
+:Description: After this many bytes have been read from an RBD image, 
read-ahead is disabled for that image until it is closed.  This allows the 
guest OS to take over read-ahead once it is booted.  If zero, read-ahead stays 
enabled.
+:Type: 64-bit Integer
+:Required: No
+:Default: ``50 MiB``




So it's important to set:

rbd readahead disable after bytes = 0

for Proxmox backups.
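Concretely, that would be a client-side setting in ceph.conf on the PVE nodes - a sketch based on the option names from the patch above (the section placement is an assumption):

```
[client]
rbd readahead disable after bytes = 0
```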





- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, VELARTIS Philipp Dürhammer 
p.duerham...@velartis.at, Dmitry Petuhov mityapetu...@gmail.com 
Sent: Saturday, 18 October 2014 09:47:08 
Subject: RE: [pve-devel] backup ceph high iops and slow 

 We read 64K blocks, so 
 
 Don't know for backup, but drive-mirror has a granularity option to change 
 the block size: 
 
 # @granularity: #optional granularity of the dirty bitmap, default is 64K. 
 # Must be a power of 2 between 512 and 64M. 

Although it would be much easier if read-ahead solved the problem ;-) 

Anybody have newest ceph with those read-ahead patches for testing? 


Re: [pve-devel] backup ceph high iops and slow

2014-10-19 Thread Dietmar Maurer
 +RBD supports read-ahead/prefetching to optimize small, sequential reads.
 +This should normally be handled by the guest OS in the case of a VM,

How should we do read-ahead inside qemu? manually?


Re: [pve-devel] backup ceph high iops and slow

2014-10-19 Thread Alexandre DERUMIER
How should we do read-ahead inside qemu? manually?

This is managed by the Linux kernel automatically:

/sys/class/block/sda/queue/read_ahead_kb
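A small sketch of inspecting that knob for every block device (read-only; writing a KiB value to the same file changes the setting):

```shell
# Print the kernel read-ahead setting (KiB) for each block device.
done_flag=0
for f in /sys/class/block/*/queue/read_ahead_kb; do
  [ -e "$f" ] || continue                       # skip if no device matches
  printf '%s: %s KiB\n' "${f%/queue/read_ahead_kb}" "$(cat "$f")"
done
done_flag=1
```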



Also about ceph performance,

another problem is that qemu is single-threaded for block access, and 
ceph/librbd cpu usage is huge, so it's possible to be 
cpu bound on 1 core.

I need to send a patch, but with virtio-scsi it's possible to do multi-queue, 
to scale on multiple cores, with the num_queue param of the virtio-scsi device.
I'm not sure it helps for the block jobs, but it really helps the 
guest IOs.




Here are some bench results on the coming Ceph Giant release (0.86) with 
different tunings:

8 cores (CPU E5-2603 v2 @ 1.80GHz): 

15000 iops 4K read :  auth_client: cephx  rbd_cache: on  (50% cpu) 
25000 iops 4K read : auth_client: cephx  rbd_cache: off (100% cpu - seems to 
have a read lock with rbd_cache=true)

4 iops 4K read : auth_client: none  rbd_cache: off (100% cpu - cephx auth 
is really cpu intensive)


And with 1 core, I can get only 7000 iops. (same inside the vm with virtio-blk)

- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, VELARTIS Philipp Dürhammer 
p.duerham...@velartis.at, Dmitry Petuhov mityapetu...@gmail.com 
Sent: Sunday, 19 October 2014 18:07:30 
Objet: RE: [pve-devel] backup ceph high iops and slow 

 +RBD supports read-ahead/prefetching to optimize small, sequential reads. 
 +This should normally be handled by the guest OS in the case of a VM, 

How should we do read-ahead inside qemu? manually? 


Re: [pve-devel] backup

2014-10-19 Thread Lindsay Mathieson
On Sun, 19 Oct 2014 07:18:34 AM Dietmar Maurer wrote:
  On Sat, 18 Oct 2014 10:56:18 AM Dietmar Maurer wrote:
Maybe a setting in the backup gui when checked prevents running
parallel backups could solve this issue?
   
   It is quite easy to setup a separate job for each node.
 
  
  
 
  Not if you need to exclude some vm's, but migrate them regularly for load
  balancing.
 
 If there is a bug somewhere, we first need a reliable test case for it, so
 that we can reproduce it.
 
 After that, we can try to find a solution for the problem.
 
 Trying to implement a solution for an unknown problem is the wrong way.


True and in my case I wonder if the problem is my NAS.

Backups to locally attached USB drive are no problem, but I've always had 
problems backing up to my NAS over GB ethernet, using NFS or SMB.

I'd often get 1 or 2 VM's failing with disk write errors, but since I updated 
the firmware on my NAS (shell shock) and started using Kernel 3.10 on proxmox 
nearly all of them are failing.

NAS: QNAP TS-420

Switch: TP-Link TL-SG1024D
  http://www.tp-link.com/en/products/details/?model=TL-SG1024D

Connection is two bonded 1Gb Links.

The nas is also my shared storage.

At this stage, if people think the NAS is the problem, I'd be upgrading it.
-- 
Lindsay



Re: [pve-devel] backup

2014-10-19 Thread Dmitry Petuhov

20.10.2014 02:20, Lindsay Mathieson wrote:
True and in my case I wonder if the problem is my NAS. Backups to 
locally attached USB drive are no problem, but I've always had 
problems backing up to my NAS over GB ethernet, using NFS or SMB. I'd 
often get 1 or 2 VM's failing with disk write errors, but since I 
updated the firmware on my NAS (shell shock) and started using Kernel 
3.10 on proxmox nearly all of them are failing. NAS: QNAP TS-420 
Switch: TP-Link TL-SG1024D 
http://www.tp-link.com/en/products/details/?model=TL-SG1024D 
Connection is two bonded 1Gb Links. The nas is also my shared storage. 
At this stage, if people think the NAS is the problem, I'd be 
upgrading it.
Yes, slow NFS servers/networks are known to cause failing backups with VM 
crashes. As a workaround, you can try enabling GZIP compression. It will slow 
down NFS writes enough to let the NAS server do its job. If that's not enough 
then try setting bwlimit in /etc/vzdump.conf.
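For reference, the bandwidth limit is a single line in /etc/vzdump.conf; the value is in KiB/s (40960 here, i.e. 40 MiB/s, is just an example figure):

```
# /etc/vzdump.conf
bwlimit: 40960
```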




Re: [pve-devel] backup ceph high iops and slow

2014-10-18 Thread Dietmar Maurer
 We read 64K blocks, so
 
 Don't know for backup, but drive-mirror has a granularity option to change the
 block size:
 
  # @granularity: #optional granularity of the dirty bitmap, default is 64K.
  #   Must be a power of 2 between 512 and 64M.

Although it would be much easier if read-ahead solved the problem ;-)

Anybody have newest ceph with those read-ahead patches for testing?
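For reference, the granularity option quoted above is passed to the QMP drive-mirror command; a sketch (device and target names here are made up for the example):

```
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio0",
                 "target": "/tmp/mirror.raw",
                 "sync": "full",
                 "granularity": 65536 } }
```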



[pve-devel] backup

2014-10-18 Thread Michael Rasmussen
Hi all,

I just discovered that when Proxmox initiates the configured automatic
backup, it runs backups from each node in the cluster in parallel. I
wonder whether this could be the cause of users now and then seeing some
of their backups fail, given that the backup storage is too slow?

Maybe a setting in the backup GUI which, when checked, prevents running
parallel backups could solve this issue?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
f u cn rd ths, u cn gt a gd jb n cmptr prgrmmng.




Re: [pve-devel] backup

2014-10-18 Thread Dietmar Maurer

 I just discovered that when Proxmox initiates the configured automatic backup,
 it runs backups from each node in the cluster in parallel. I wonder whether this
 could be the cause of users now and then seeing some of their backups fail, given
 that the backup storage is too slow?
 
 Maybe a setting in the backup GUI which, when checked, prevents running parallel
 backups could solve this issue?

It is quite easy to setup a separate job for each node.




Re: [pve-devel] backup

2014-10-18 Thread Michael Rasmussen
On Sat, 18 Oct 2014 10:56:18 +
Dietmar Maurer diet...@proxmox.com wrote:

 
 It is quite easy to setup a separate job for each node.
 
I know, but maybe users are not aware of this complication. If it can
remove a lot of noise on the forum from people with failing backups I
think it is worth implementing.

-- 
Hilsen/Regards
Michael Rasmussen





Re: [pve-devel] backup

2014-10-18 Thread Dietmar Maurer

  It is quite easy to setup a separate job for each node.
 
 I know, but maybe users are not aware of this complication.

 If it can remove a
 lot of noise on the forum from people with failing backups I think it is 
 worth
 implementing.

1.) I don't think this is the reason for the problems.
2.) it works perfectly for 99% of people
3.) There is a workaround
4.) what you suggest is difficult to implement




Re: [pve-devel] backup

2014-10-18 Thread Gilberto Nunes
Well...

I have just one node here, because the other server hasn't arrived yet, but
I have already created the cluster...
And my backup task is extremely slow!
It's a file of about 23 GB that takes almost 5 hours.
OK! I ran the backup over LVM-on-top-of-iSCSI.

VMID  NAME         STATUS  TIME      SIZE     FILENAME
103   Windows2012  OK      04:53:41  22.52GB  /BKPVMS/dump/vzdump-qemu-103-2014_10_17-18_30_02.vma.lzo
TOTAL                      04:53:41  22.52GB


On the other hand, on another server, a file of about 80 GB takes almost 45
minutes!

VMID  NAME     STATUS  TIME      SIZE     FILENAME
100   Win2008  OK      00:44:51  78.98GB  /STORAGE03/dump/vzdump-qemu-100-2014_10_17-23_00_02.vma.lzo
TOTAL                  00:44:51  78.98GB

Something is surely wrong, but I can't believe that parallel tasks have
anything to do with it, since I have just one node...

Any advice is welcome.
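Putting numbers on the difference (sizes and durations taken from the two job logs above):

```shell
# Effective throughput of the two backup jobs quoted above (GB*1024 -> MB).
report=$(awk 'BEGIN {
  printf "VM 103: %.1f MB/s\n", 22.52*1024 / (4*3600 + 53*60 + 41);
  printf "VM 100: %.1f MB/s\n", 78.98*1024 / (44*60 + 51);
}')
echo "$report"
```

That is roughly 1.3 MB/s versus 30 MB/s - a 20x gap between the two jobs.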

2014-10-18 12:21 GMT-03:00 Dietmar Maurer diet...@proxmox.com:


   It is quite easy to setup a separate job for each node.
  
  I know, but maybe users are not aware of this complication.

  If it can remove a
  lot of noise on the forum from people with failing backups I think it
 is worth
  implementing.

 1.) I don't think this is the reason for the problems.
 2.) it works perfectly for 99% of people
 3.) There is a workaround
 4.) what you suggest is difficult to implement






-- 
--

Gilberto Ferreira


Re: [pve-devel] backup

2014-10-18 Thread Michael Rasmussen
On Sat, 18 Oct 2014 13:03:57 -0300
Gilberto Nunes gilberto.nune...@gmail.com wrote:

 Well...
 
 I have just one node here, because the other server doesn't arrive yet, but
 I already create the cluster...
 And my backup task are extremaly slow!
 It a file about 23 GB, that take almost 5 hours.
 Ok! I ran backup over LVM-on-top-iSCSI
 
 VMID   NAME         STATUS  TIME      SIZE     FILENAME
 103    Windows2012  OK      04:53:41  22.52GB  /BKPVMS/dump/vzdump-qemu-103-2014_10_17-18_30_02.vma.lzo
 TOTAL                       04:53:41  22.52GB
 
 
 On the other hand, this is another server , and a file with about 80 GB of
 size, take almost 45 minutes!
 
 
 VMID   NAME     STATUS  TIME      SIZE     FILENAME
 100    Win2008  OK      00:44:51  78.98GB  /STORAGE03/dump/vzdump-qemu-100-2014_10_17-23_00_02.vma.lzo
 TOTAL                   00:44:51  78.98GB
 
This from my backup:
128ok   00:29:26
17.08GB  /mnt/pve/qnap_nfs/dump/vzdump-qemu-128-2014_10_18-05_31_23.vma.lzo

In general I see backup speeds around 25 MB/s on my relatively slow Qnap.

One question: do you make backups to the same storage the VMs reside
on?

If that is the case, I would imagine the speed is halved.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
--
/usr/games/fortune -es says:
I keep seeing spots in front of my eyes.
Did you ever see a doctor?
No, just spots.




Re: [pve-devel] backup

2014-10-18 Thread Gilberto Nunes
Oh dear! Qnap! A pity! The Qnap has only 128 MB of memory and low-end
hardware...
Here I do backups locally, to an external USB HDD.
On one server I see around 15-18 MB/s, but on some occasions I have seen
less than 10 MB/s.
Compared to the other server, where I see 60-80 MB/s...

2014-10-18 13:27 GMT-03:00 Michael Rasmussen m...@datanom.net:

 On Sat, 18 Oct 2014 13:03:57 -0300
 Gilberto Nunes gilberto.nune...@gmail.com wrote:

  Well...
 
  I have just one node here, because the other server doesn't arrive yet,
 but
  I already create the cluster...
  And my backup task are extremaly slow!
  It a file about 23 GB, that take almost 5 hours.
  Ok! I ran backup over LVM-on-top-iSCSI
 
  VMID   NAME         STATUS  TIME      SIZE     FILENAME
  103    Windows2012  OK      04:53:41  22.52GB  /BKPVMS/dump/vzdump-qemu-103-2014_10_17-18_30_02.vma.lzo
  TOTAL                       04:53:41  22.52GB
 
 
  On the other hand, this is another server , and a file with about 80 GB
 of
  size, take almost 45 minutes!
 
 
  VMID   NAME     STATUS  TIME      SIZE     FILENAME
  100    Win2008  OK      00:44:51  78.98GB  /STORAGE03/dump/vzdump-qemu-100-2014_10_17-23_00_02.vma.lzo
  TOTAL                   00:44:51  78.98GB
 
 This from my backup:
 128ok   00:29:26
 17.08GB  /mnt/pve/qnap_nfs/dump/vzdump-qemu-128-2014_10_18-05_31_23.vma.lzo

 In generel I see backup speed around 25 MB/s on my relatively slow Qnap.

 One question: Do you make backup to the same storage as the VM's reside
 on?

 If this is the case then I would imagine that speed were halfed.

 --
 Hilsen/Regards
 Michael Rasmussen

 Get my public GnuPG keys:
 michael at rasmussen dot cc
 http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
 mir at datanom dot net
 http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
 mir at miras dot org
 http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
 --
 /usr/games/fortune -es says:
 I keep seeing spots in front of my eyes.
 Did you ever see a doctor?
 No, just spots.

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




-- 
--

Gilberto Ferreira


Re: [pve-devel] backup

2014-10-18 Thread Lindsay Mathieson
On Sat, 18 Oct 2014 10:56:18 AM Dietmar Maurer wrote:
  Maybe a setting in the backup gui when checked prevents running parallel
  backups could solve this issue?
 
 It is quite easy to setup a separate job for each node.


Not if you need to exclude some VMs but migrate them regularly for load
balancing.
-- 
Lindsay



Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Dmitry Petuhov

16.10.2014 22:33, VELARTIS Philipp Dürhammer wrote:

Why do backups with ceph cause such high iops?
I get around 600 iops for 40mb/sec, which is by the way very slow for a backup.
When I make a disk clone from local to ceph I get 120mb/sec (which is the
network limit of the old proxmox nodes) and only around 100-120 iops, which is
normal for a sequential read at 120mb/sec...
Is vzdump causing problems?
It's not vzdump's issue, but qemu's. See 
http://forum.proxmox.com/threads/19341-Ceph-backup


--
Best regards,
  Dmitry Petuhov
  Lead System Administrator
  IT Department
  ZAO Elektro-kom
  phone: +79209755610
  email: d.petu...@electro-com.ru



Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Alexandre DERUMIER
I have talked with the ceph and qemu devs about this (mainly regarding
drive-mirror, but it's almost the same).

The rbd block driver is missing a feature to skip zero blocks on read.

For restore, no problem.

For drive-mirror writes to the target, a feature to skip zero-block
writes is also missing.


- Original message -

From: Dmitry Petuhov mityapetu...@gmail.com
To: pve-devel@pve.proxmox.com
Sent: Friday, 17 October 2014 08:04:04
Subject: Re: [pve-devel] backup ceph high iops and slow

16.10.2014 22:33, VELARTIS Philipp Dürhammer wrote:
 Why do backups with ceph make so high iops?
 I get around 600 iops for 40mb/sec which is by the way very slow for a backup.
 When I make a disk clone from local to ceph I get 120mb/sec (which is the 
 network limit from the old proxmox nodes) and only around 100-120 iops which 
 is the normal for a seq read with 120mb/sec...
 Is the vzdump making problems?
It's not vzdump's issue, but qemu's. See
http://forum.proxmox.com/threads/19341-Ceph-backup

--
Best regards,
Dmitry Petuhov
Lead System Administrator
IT Department
ZAO Elektro-kom
phone: +79209755610
email: d.petu...@electro-com.ru



Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Dmitry Petuhov
I think that skipping free space isn't the main issue: backup of used space
is even slower.


17.10.2014 10:31, Alexandre DERUMIER wrote:

I have talked with ceph and qemu dev about this. (mainly from drive-mirror, but 
it's almost the same).

It's missing some feature in rbd block driver to skip zero block on read.

for restore no problem.

For drive-mirror write to target, it's also missing a feature to skip 
zero-block write.


- Original message -

From: Dmitry Petuhov mityapetu...@gmail.com
To: pve-devel@pve.proxmox.com
Sent: Friday, 17 October 2014 08:04:04
Subject: Re: [pve-devel] backup ceph high iops and slow

16.10.2014 22:33, VELARTIS Philipp Dürhammer wrote:

Why do backups with ceph make so high iops?
I get around 600 iops for 40mb/sec which is by the way very slow for a backup.
When I make a disk clone from local to ceph I get 120mb/sec (which is the 
network limit from the old proxmox nodes) and only around 100-120 iops which is 
the normal for a seq read with 120mb/sec...
Is the vzdump making problems?

It's not vzdump's issue, but qemu's. See
http://forum.proxmox.com/threads/19341-Ceph-backup




--
Best regards,
  Dmitry Petuhov
  Lead System Administrator
  IT Department
  ZAO Elektro-kom
  phone: +79209755610
  email: d.petu...@electro-com.ru



Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread VELARTIS Philipp Dürhammer
I agree. The main problem is with used space.
I didn't check for mirroring, but reading unused space for backup is super fast
(around 1200mb/sec), while used space is only 40-50mb/sec with too many
iops...

-----Original Message-----
From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On behalf of Dmitry
Petuhov
Sent: Friday, 17 October 2014 09:25
To: Alexandre DERUMIER
Cc: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] backup ceph high iops and slow

I think that skipping free space isn't main issue: backup of used space is even 
slower.

17.10.2014 10:31, Alexandre DERUMIER wrote:
 I have talked with ceph and qemu dev about this. (mainly from drive-mirror, 
 but it's almost the same).

 It's missing some feature in rbd block driver to skip zero block on read.

 for restore no problem.

 For drive-mirror write to target, it's also missing a feature to skip 
 zero-block write.


  - Original message -
 
  From: Dmitry Petuhov mityapetu...@gmail.com
  To: pve-devel@pve.proxmox.com
  Sent: Friday, 17 October 2014 08:04:04
  Subject: Re: [pve-devel] backup ceph high iops and slow

 16.10.2014 22:33, VELARTIS Philipp Dürhammer wrote:
 Why do backups with ceph make so high iops?
 I get around 600 iops for 40mb/sec which is by the way very slow for a 
 backup.
 When I make a disk clone from local to ceph I get 120mb/sec (which is the 
 network limit from the old proxmox nodes) and only around 100-120 iops which 
 is the normal for a seq read with 120mb/sec...
 Is the vzdump making problems?
 It's not vzdump's issue, but qemu's. See 
 http://forum.proxmox.com/threads/19341-Ceph-backup



--
Best regards,
   Dmitry Petuhov
   Lead System Administrator
   IT Department
   ZAO Elektro-kom
   phone: +79209755610
   email: d.petu...@electro-com.ru



Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Alexandre DERUMIER
We read 64K blocks, so

Don't know about backup, but drive-mirror has a granularity option to change the
block size:

 # @granularity: #optional granularity of the dirty bitmap, default is 64K.
 #   Must be a power of 2 between 512 and 64M.
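As a sketch of how that option might be used, assuming a hypothetical device id and target (only the `granularity` argument itself comes from the documentation quoted above):

```python
import json

# Build a hypothetical QMP 'drive-mirror' command that raises the dirty-bitmap
# granularity from the 64K default to 1 MiB, trading many small requests for
# fewer large ones.
granularity = 1024 * 1024  # must be a power of 2 between 512 and 64M
assert granularity & (granularity - 1) == 0 and 512 <= granularity <= 64 * 1024 * 1024

cmd = {
    "execute": "drive-mirror",
    "arguments": {
        "device": "drive-virtio0",          # hypothetical device id
        "target": "rbd:rbd/vm-123-disk-1",  # hypothetical ceph target
        "sync": "full",
        "granularity": granularity,
    },
}
print(json.dumps(cmd))
```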


- Original message -

From: Dietmar Maurer diet...@proxmox.com
To: VELARTIS Philipp Dürhammer p.duerham...@velartis.at, Dmitry Petuhov 
mityapetu...@gmail.com, Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 17 October 2014 13:02:29
Subject: RE: [pve-devel] backup ceph high iops and slow

 I agree. The main problem is with used space.
 I didnt check for mirroring - but reading unused space for backup is super 
 fast
 (around 1200mb/sec) But used space only 40-50mb/sec and too many iops... 

We read 64K blocks, so

40000/64 == 625 IOPS

Maybe the pending ceph read-ahead changes solve that problem?
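The arithmetic behind that estimate can be sketched as follows (taking 1 MB as 1000 KB, to match the 625 figure above):

```python
def expected_iops(throughput_mb_s: float, block_kb: float) -> float:
    """Request rate needed to sustain a given throughput at a fixed block size."""
    return throughput_mb_s * 1000 / block_kb

# vzdump reads 64K blocks: 40 MB/s needs ~625 reads/s, matching the ~600 iops seen.
print(expected_iops(40, 64))     # 625.0
# A clone doing ~1 MB sequential reads needs far fewer requests at 120 MB/s.
print(expected_iops(120, 1000))  # 120.0
```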


[pve-devel] backup ceph high iops and slow

2014-10-16 Thread VELARTIS Philipp Dürhammer
Why do backups with ceph cause such high iops?
I get around 600 iops for 40mb/sec, which is by the way very slow for a backup.
When I make a disk clone from local to ceph I get 120mb/sec (which is the
network limit of the old proxmox nodes) and only around 100-120 iops, which is
normal for a sequential read at 120mb/sec...
Is vzdump causing problems?


[pve-devel] Backup fails

2014-09-13 Thread Michael Rasmussen
Hi all,

Recently I have begun to see logs like the one below. It is the
automatic backup which fails backing up some of the VMs, but it seems
completely random which VM will fail when the backup is performed.

Another observation is this warning, given when every backup starts: Use
of uninitialized value $cmd[8] in exec

INFO: starting new backup job: vzdump 109 114 115 117 128 113 143 144
148 --quiet 1 --mailto f...@bar.tld --mode snapshot --compress lzo
--storage qnap_nfs INFO: skip external VMs: 109, 114, 117, 113, 144
INFO: Starting Backup of VM 115 (qemu) 
INFO: status = running
INFO: update VM 115: -lock backup 
Use of uninitialized value $cmd[8] in exec
at /usr/share/perl/5.14/IPC/Open3.pm line 186. 
Use of uninitialized value $cmd[8] in exec
at /usr/share/perl/5.14/IPC/Open3.pm line 186. 
INFO: backup mode: snapshot
INFO: bandwidth limit: 5 KB/s 
INFO: ionice priority: 7 
INFO: skip unused drive 'omnios_nfs:115/vm-115-disk-1.qcow2' (not
included into backup)
INFO: creating archive
'/mnt/pve/qnap_nfs/dump/vzdump-qemu-115-2014_09_13-05_15_01.vma.lzo'
INFO: started backup task 'c94ecd45-6ffa-44fd-9cb9-7433982ddccf' 
INFO: status: 0% (111869952/21474836480), sparse 0% (13647872),
duration 3, 37/32 MB/s 
INFO: status: 1% (217513984/21474836480), sparse
0% (41357312), duration 6, 35/25 MB/s 
INFO: status: 2% (466550784/21474836480), sparse 0% (41754624),
duration 13, 35/35 MB/s 
INFO: status: 3% (664010752/21474836480), sparse 0% (41779200),
duration 19, 32/32 MB/s 
INFO: status: 4% (866779136/21474836480), sparse 0% (41779200),
duration 26, 28/28 MB/s 
INFO: status: 5% (1075707904/21474836480), sparse 0% (42557440),
duration 36, 20/20 MB/s 
INFO: status: 6% (1308884992/21474836480), sparse 0% (42557440),
duration 55, 12/12 MB/s 
INFO: status: 7% (1521549312/21474836480), sparse 0% (42557440),
duration 67, 17/17 MB/s 
INFO: status: 8% (1730674688/21474836480), sparse 0% (53166080),
duration 73, 34/33 MB/s 
INFO: status: 9% (1935212544/21474836480), sparse 0% (53321728),
duration 79, 34/34 MB/s 
INFO: status: 10% (2172780544/21474836480), sparse 0% (53321728),
duration 85, 39/39 MB/s 
INFO: status: 11% (2390884352/21474836480), sparse 0% (83255296),
duration 106, 10/8 MB/s 
INFO: status: 12% (2577137664/21474836480), sparse 0% (83255296),
duration 117, 16/16 MB/s 
INFO: status: 13% (2828009472/21474836480), sparse 0% (83255296),
duration 124, 35/35 MB/s 
INFO: status: 14% (3007053824/21474836480), sparse 0% (83255296),
duration 130, 29/29 MB/s 
INFO: status: 15% (3255173120/21474836480), sparse 0% (83255296),
duration 155, 9/9 MB/s 
INFO: status: 16% (3462791168/21474836480), sparse 0% (83259392),
duration 167, 17/17 MB/s 
INFO: status: 17% (3662348288/21474836480), sparse 0% (83259392),
duration 172, 39/39 MB/s 
INFO: status: 18% (3889430528/21474836480), sparse 0% (83259392),
duration 198, 8/8 MB/s 
INFO: status: 19% (4090167296/21474836480), sparse 0% (95260672),
duration 204, 33/31 MB/s
lzop: Input/output error: stdout 
INFO: status: 19% (4116578304/21474836480), sparse 0% (105844736),
duration 215, 2/1 MB/s 
ERROR: vma_queue_write: write error - Broken pipe
INFO: aborting backup job 
ERROR: Backup of VM 115 failed - vma_queue_write: write error - Broken
pipe

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
--
/usr/games/fortune -es says:
You'd like to do it instantaneously, but that's too slow.




Re: [pve-devel] Backup Scheduling

2014-07-06 Thread Dietmar Maurer
 So the WebGUI for backups basically edits a cron job?

yes
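For illustration, a minimal sketch of the kind of line such a generated /etc/cron.d/vzdump entry could contain (the VMIDs, schedule, and options below are invented, not read from a real node):

```python
# Compose a crontab line of the kind the PVE backup GUI manages.
minute, hour, dom, month, dow = "0", "1", "*", "*", "6"  # Saturdays at 01:00
vmids = "115 128"
opts = "--quiet 1 --mode snapshot --compress lzo --storage qnap_nfs"
line = f"{minute} {hour} {dom} {month} {dow} root vzdump {vmids} {opts}"
print(line)
```

Adding per-month schedules would then mainly mean letting the GUI write a value other than `*` into the month field.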



Re: [pve-devel] Backup Scheduling

2014-07-05 Thread Lindsay Mathieson
On Sun, 6 Jul 2014 03:25:16 AM Dietmar Maurer wrote:
  Any possibility of adding per month schedules to backups? this would be
  very useful for our DR backups, where we rotate offsite disks once per
  month.
 cron supports that, so it should not be that hard to implement it (but
 someone needs to write that patch).


So the WebGUI for backups basically edits a cron job?
-- 
Lindsay



Re: [pve-devel] Backup code / part

2013-02-16 Thread Stefan Priebe

Answering myself: no, it's not working for rbd:

-
INFO: starting new backup job: vzdump 123 --remove 0 --mode snapshot 
--compress lzo --storage local --node node1283

INFO: Starting Backup of VM 123 (qemu)
INFO: status = running
ERROR: Backup of VM 123 failed - no such volume 'vmstorssd1:vm-123-disk-1'
INFO: Backup job finished with errors
TASK ERROR: job errors

-

Stefan
On 16.02.2013 11:30, Stefan Priebe - Profihost AG wrote:

Is the backup code ready for testing? Should it work with rbd? Anything special 
I should focus on?

Stefan




Re: [pve-devel] Backup code / part

2013-02-16 Thread Alexandre DERUMIER
No answer myself. No it's not working for rbd: 

A forum user has the same problem. I think it shouldn't use vzdump --mode
snapshot; maybe the new backup system is not yet enabled for all storages?
- Original message -

From: Stefan Priebe s.pri...@profihost.ag 
To: pve-devel@pve.proxmox.com 
Sent: Saturday, 16 February 2013 13:16:21 
Subject: Re: [pve-devel] Backup code / part 

No answer myself. No it's not working for rbd: 

- 
INFO: starting new backup job: vzdump 123 --remove 0 --mode snapshot 
--compress lzo --storage local --node node1283 
INFO: Starting Backup of VM 123 (qemu) 
INFO: status = running 
ERROR: Backup of VM 123 failed - no such volume 'vmstorssd1:vm-123-disk-1' 
INFO: Backup job finished with errors 
TASK ERROR: job errors 

- 

Stefan 
On 16.02.2013 11:30, Stefan Priebe - Profihost AG wrote: 
 Is the backup code ready for testing? Should it work with rbd? Anything 
 special I should focus on? 
 
 Stefan 
 


Re: [pve-devel] Backup code / part

2013-02-16 Thread Dietmar Maurer
 I can look too; should it generally work through vzdump and mode snapshot?

yes



[pve-devel] backup patches

2012-11-21 Thread Dietmar Maurer
I just posted the backup patches to the qemu-devel list.


Re: [pve-devel] backup RFC preview

2012-11-19 Thread Alexandre DERUMIER
: error: too many arguments to function ‘vma_reader_get_device_info’
vma.c:257: warning: assignment makes pointer from integer without a cast
vma.c:273: warning: passing argument 2 of ‘vma_reader_register_bs’ makes 
pointer from integer without a cast
vma.h:133: note: expected ‘struct BlockDriverState *’ but argument is of type 
‘int’
vma.c:273: error: incompatible type for argument 4 of ‘vma_reader_register_bs’
vma.h:133: note: expected ‘struct Error **’ but argument is of type ‘_Bool’
vma.c:273: error: too many arguments to function ‘vma_reader_register_bs’
vma.c:289: error: too many arguments to function ‘vma_reader_get_device_info’
vma.c:291: warning: initialization makes pointer from integer without a cast
vma.c:292: warning: initialization makes pointer from integer without a cast
vma.c: In function ‘create_archive’:
vma.c:345: error: ‘GList’ undeclared (first use in this function)
vma.c:345: error: ‘config_files’ undeclared (first use in this function)
vma.c:383: error: ‘l’ undeclared (first use in this function)
vma.c:387: error: ‘gsize’ undeclared (first use in this function)
vma.c:387: error: expected ‘;’ before ‘clen’
vma.c:388: error: ‘GError’ undeclared (first use in this function)
vma.c:388: error: ‘err’ undeclared (first use in this function)
vma.c:389: error: ‘clen’ undeclared (first use in this function)
vma.c:394: warning: passing argument 3 of ‘vma_writer_add_config’ makes integer 
from pointer without a cast
vma.h:115: note: expected ‘size_t’ but argument is of type ‘char *’
vma.c:394: error: too many arguments to function ‘vma_writer_add_config’
vma.c:421: error: expected expression before ‘BackupCB’
make: *** [vma] Error 1


- Original message -

From: Dietmar Maurer diet...@proxmox.com 
To: pve-devel@pve.proxmox.com 
Sent: Tuesday, 13 November 2012 14:04:02 
Subject: [pve-devel] backup RFC preview 



This is a preview of the planed backup feature. 

The following patches are against latest qemu.git 

I plan to send the patch to the qemu-devel list next week. 

Before I do that, I would like to get some feedback about: 

* functionality 
* implementation style 
* possible improvements 

Feel free to ask questions ;-) 

- Dietmar 


Re: [pve-devel] backup RFC preview

2012-11-19 Thread Dietmar Maurer
Just sent an updated version -  does that work better?

 -Original Message-
 From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
 Sent: Montag, 19. November 2012 12:24
 To: Dietmar Maurer
 Cc: pve-devel@pve.proxmox.com
 Subject: Re: [pve-devel] backup RFC preview
 
 
 Does it help when you specify the correct target-list (else it tries to 
 build all
 targets)
 
 ./configure --target-list x86_64-softmmu ...
 
 Doesn't help, but I have found the problem, I have a part of the patch which
 doesn't apply


Re: [pve-devel] backup RFC preview

2012-11-19 Thread Alexandre DERUMIER
Just sent an updated version - does that work better? 
no :/

Last qemu git:

I got Hunk Failed on Makefile

root@kvmtest1:~/qemu2/qemu# patch -p1  patch1.patch 
patching file docs/backup-rfc.txt
root@kvmtest1:~/qemu2/qemu# patch -p1  patch2.patch 
patching file Makefile.objs
Hunk #1 succeeded at 48 (offset 4 lines).
patching file backup.c
patching file block.c
patching file block.h
patching file block_int.h
root@kvmtest1:~/qemu2/qemu# patch -p1  patch3.patch 
patching file Makefile
Hunk #2 FAILED at 186.
1 out of 2 hunks FAILED -- saving rejects to file Makefile.rej
patching file docs/specs/vma_spec.txt
patching file vma-reader.c
patching file vma-writer.c
patching file vma.c
patching file vma.h
root@kvmtest1:~/qemu2/qemu# patch -p1  patch4.patch 
patching file Makefile
Hunk #1 FAILED at 186.
1 out of 1 hunk FAILED -- saving rejects to file Makefile.rej
patching file Makefile.objs
Hunk #1 succeeded at 48 (offset 4 lines).
patching file blockdev.c
patching file hmp-commands.hx
patching file hmp.c
patching file hmp.h
patching file monitor.c
patching file qapi-schema.json
patching file qmp-commands.hx
root@kvmtest1:~/qemu2/qemu# patch -p1  patch5.patch 
patching file include/qemu/ratelimit.h
patching file tests/Makefile
patching file tests/backup-test.c



After manually adding to the Makefile
 

+vma$(EXESUF): vma.o vma-reader.o $(tools-obj-y) $(block-obj-y)

I get same error:


#./configure --target-list=x86_64-softmmu --prefix=/usr --datadir=/usr/share 
--docdir=/usr/share/doc/pve-qemu-kvm --sysconfdir=/etc --disable-xen

# make
  CCasync.o
  CCnbd.o
  CCblock.o
  CCblockjob.o
  CCaes.o
  CCqemu-config.o
  CCthread-pool.o
  CCqemu-progress.o
  CCuri.o
  CCnotify.o
  CCvma-writer.o
  CCbackup.o
  CCqemu-coroutine.o
  CCqemu-coroutine-lock.o
  CCqemu-coroutine-io.o
  CCqemu-coroutine-sleep.o
  CCcoroutine-ucontext.o
  CCevent_notifier-posix.o
  CCaio-posix.o
  CCblock/raw.o
  CCblock/cow.o
  CCblock/qcow.o
  CCblock/vdi.o
  CCblock/vmdk.o
  CCblock/cloop.o
  CCblock/dmg.o
  CCblock/bochs.o
  CCblock/vpc.o
  CCblock/vvfat.o
  CCblock/qcow2.o
  CCblock/qcow2-refcount.o
  CCblock/qcow2-cluster.o
  CCblock/qcow2-snapshot.o
  CCblock/qcow2-cache.o
  CCblock/qed.o
  CCblock/qed-gencb.o
  CCblock/qed-l2-cache.o
  CCblock/qed-table.o
  CCblock/qed-cluster.o
  CCblock/qed-check.o
  CCblock/parallels.o
  CCblock/blkdebug.o
  CCblock/blkverify.o
  CCblock/raw-posix.o
  CCblock/linux-aio.o
  CCblock/nbd.o
  CCblock/sheepdog.o
  CCblock/iscsi.o
  CCblock/curl.o
  CCblock/rbd.o
  LINK  qemu-nbd
  GEN   qemu-img-cmds.h
  CCqemu-img.o
  LINK  qemu-img
  CCqemu-io.o
  CCcmd.o
  LINK  qemu-io
  CCfsdev/virtfs-proxy-helper.o
  CCfsdev/virtio-9p-marshal.o
  LINK  fsdev/virtfs-proxy-helper
  CCvma.o
  CCvma-reader.o
  LINK  vma
osdep.o: In function `qemu_close':
/root/qemu2/qemu/osdep.c:212: undefined reference to `monitor_fdset_dup_fd_find'
/root/qemu2/qemu/osdep.c:218: undefined reference to 
`monitor_fdset_dup_fd_remove'
osdep.o: In function `qemu_open':
/root/qemu2/qemu/osdep.c:166: undefined reference to `monitor_fdset_get_fd'
/root/qemu2/qemu/osdep.c:176: undefined reference to `monitor_fdset_dup_fd_add'
qemu-sockets.o: In function `socket_connect':
/root/qemu2/qemu/qemu-sockets.c:906: undefined reference to `monitor_get_fd'
qemu-sockets.o: In function `socket_listen':
/root/qemu2/qemu/qemu-sockets.c:937: undefined reference to `monitor_get_fd'
collect2: ld returned 1 exit status
make: *** [vma] Error 1




note:
patch3 adds:
+vma$(EXESUF): vma.o vma-writer.o vma-reader.o $(tools-obj-y) $(block-obj-y)


and patch 4:
-vma$(EXESUF): vma.o vma-writer.o vma-reader.o $(tools-obj-y) $(block-obj-y)
+vma$(EXESUF): vma.o vma-reader.o $(tools-obj-y) $(block-obj-y)



- Original message -

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Monday, 19 November 2012 12:30:23 
Subject: RE: [pve-devel] backup RFC preview 

Just sent an updated version - does that work better? 

 -Original Message- 
 From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
 Sent: Montag, 19. November 2012 12:24 
 To: Dietmar Maurer 
 Cc: pve-devel@pve.proxmox.com 
 Subject: Re: [pve-devel] backup RFC preview 
 
 
 Does it help when you specify the correct target-list (else it tries to 
 build all 
 targets) 
  
 ./configure --target-list x86_64-softmmu ... 
 
 Doesn't help, but I have found the problem, I have a part of the patch which 
 doesn't apply 


Re: [pve-devel] backup RFC preview

2012-11-19 Thread Dietmar Maurer
My patches are against commit 6801038bc52d61f81ac8a25fbe392f1bad982887

Try to use that commit.

Maybe I should expose my internal qemu-git tree?

 -Original Message-
 From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
 Sent: Montag, 19. November 2012 13:33
 To: Dietmar Maurer
 Cc: pve-devel@pve.proxmox.com
 Subject: Re: [pve-devel] backup RFC preview
 
 Just sent an updated version - does that work better?
 no :/
 
 Last qemu git:
 
 I got Hunk Failed on Makefile
 
 root@kvmtest1:~/qemu2/qemu# patch -p1  patch1.patch patching file
 docs/backup-rfc.txt root@kvmtest1:~/qemu2/qemu# patch -p1 
 patch2.patch patching file Makefile.objs Hunk #1 succeeded at 48 (offset 4
 lines).
 patching file backup.c
 patching file block.c
 patching file block.h
 patching file block_int.h
 root@kvmtest1:~/qemu2/qemu# patch -p1  patch3.patch patching file
 Makefile Hunk #2 FAILED at 186.
 1 out of 2 hunks FAILED -- saving rejects to file Makefile.rej patching file
 docs/specs/vma_spec.txt patching file vma-reader.c patching file vma-
 writer.c patching file vma.c patching file vma.h
 root@kvmtest1:~/qemu2/qemu# patch -p1  patch4.patch patching file
 Makefile Hunk #1 FAILED at 186.
 1 out of 1 hunk FAILED -- saving rejects to file Makefile.rej patching file
 Makefile.objs Hunk #1 succeeded at 48 (offset 4 lines).
 patching file blockdev.c
 patching file hmp-commands.hx
 patching file hmp.c
 patching file hmp.h
 patching file monitor.c
 patching file qapi-schema.json
 patching file qmp-commands.hx
 root@kvmtest1:~/qemu2/qemu# patch -p1  patch5.patch patching file
 include/qemu/ratelimit.h patching file tests/Makefile patching file
 tests/backup-test.c
 
 
 
 After manually add to Makefile
 
 
 +vma$(EXESUF): vma.o vma-reader.o $(tools-obj-y) $(block-obj-y)
 
 I get same error:
 
 
 #./configure --target-list=x86_64-softmmu --prefix=/usr --datadir=/usr/share
 --docdir=/usr/share/doc/pve-qemu-kvm --sysconfdir=/etc --disable-xen
 
 # make
   CCasync.o
   CCnbd.o
   CCblock.o
   CCblockjob.o
   CCaes.o
   CCqemu-config.o
   CCthread-pool.o
   CCqemu-progress.o
   CCuri.o
   CCnotify.o
   CCvma-writer.o
   CCbackup.o
   CCqemu-coroutine.o
   CCqemu-coroutine-lock.o
   CCqemu-coroutine-io.o
   CCqemu-coroutine-sleep.o
   CCcoroutine-ucontext.o
   CCevent_notifier-posix.o
   CCaio-posix.o
   CCblock/raw.o
   CCblock/cow.o
   CCblock/qcow.o
   CCblock/vdi.o
   CCblock/vmdk.o
   CCblock/cloop.o
   CCblock/dmg.o
   CCblock/bochs.o
   CCblock/vpc.o
   CCblock/vvfat.o
   CCblock/qcow2.o
   CCblock/qcow2-refcount.o
   CCblock/qcow2-cluster.o
   CCblock/qcow2-snapshot.o
   CCblock/qcow2-cache.o
   CCblock/qed.o
   CCblock/qed-gencb.o
   CCblock/qed-l2-cache.o
   CCblock/qed-table.o
   CCblock/qed-cluster.o
   CCblock/qed-check.o
   CCblock/parallels.o
   CCblock/blkdebug.o
   CCblock/blkverify.o
   CCblock/raw-posix.o
   CCblock/linux-aio.o
   CCblock/nbd.o
   CCblock/sheepdog.o
   CCblock/iscsi.o
   CCblock/curl.o
   CCblock/rbd.o
   LINK  qemu-nbd
   GEN   qemu-img-cmds.h
   CCqemu-img.o
   LINK  qemu-img
   CCqemu-io.o
   CCcmd.o
   LINK  qemu-io
   CCfsdev/virtfs-proxy-helper.o
   CCfsdev/virtio-9p-marshal.o
   LINK  fsdev/virtfs-proxy-helper
   CCvma.o
   CCvma-reader.o
   LINK  vma
 osdep.o: In function `qemu_close':
 /root/qemu2/qemu/osdep.c:212: undefined reference to
 `monitor_fdset_dup_fd_find'
 /root/qemu2/qemu/osdep.c:218: undefined reference to
 `monitor_fdset_dup_fd_remove'
 osdep.o: In function `qemu_open':
 /root/qemu2/qemu/osdep.c:166: undefined reference to
 `monitor_fdset_get_fd'
 /root/qemu2/qemu/osdep.c:176: undefined reference to
 `monitor_fdset_dup_fd_add'
 qemu-sockets.o: In function `socket_connect':
 /root/qemu2/qemu/qemu-sockets.c:906: undefined reference to
 `monitor_get_fd'
 qemu-sockets.o: In function `socket_listen':
 /root/qemu2/qemu/qemu-sockets.c:937: undefined reference to
 `monitor_get_fd'
 collect2: ld returned 1 exit status
 make: *** [vma] Error 1
 
 
 
 
 note :
 patch3 add:
 +vma$(EXESUF): vma.o vma-writer.o vma-reader.o $(tools-obj-y)
 +$(block-obj-y)
 
 
 and patch 4:
 -vma$(EXESUF): vma.o vma-writer.o vma-reader.o $(tools-obj-y) $(block-obj-
 y)
 +vma$(EXESUF): vma.o vma-reader.o $(tools-obj-y) $(block-obj-y)
 
 
 
 - Original message -
 
 From: Dietmar Maurer diet...@proxmox.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Cc: pve-devel@pve.proxmox.com
 Sent: Monday, 19 November 2012 12:30:23
 Subject: RE: [pve-devel] backup RFC preview
 
 Just sent an updated version - does that work better?
 
  -Original Message-
  From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
  Sent: Montag, 19. November 2012 12:24
  To: Dietmar Maurer
  Cc: pve-devel@pve.proxmox.com
  Subject: Re: [pve-devel] backup RFC preview
 
 
  Does it help when you specify the correct

Re: [pve-devel] backup RFC preview

2012-11-19 Thread Dietmar Maurer
 osdep.o: In function `qemu_close':
 /root/qemu2/qemu/osdep.c:212: undefined reference to `monitor_fdset_dup_fd_find'
 /root/qemu2/qemu/osdep.c:218: undefined reference to `monitor_fdset_dup_fd_remove'

AFAIK my patches do not touch those files - maybe the current qemu build is broken?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] backup RFC preview

2012-11-19 Thread Alexandre DERUMIER
yes, maybe the qemu build is broken :/

#git checkout 6801038bc52d61f81ac8a25fbe392f1bad982887

and now it seems to build fine :)



- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 19 November 2012 13:41:03
Subject: RE: [pve-devel] backup RFC preview

 osdep.o: In function `qemu_close':
 /root/qemu2/qemu/osdep.c:212: undefined reference to `monitor_fdset_dup_fd_find'
 /root/qemu2/qemu/osdep.c:218: undefined reference to `monitor_fdset_dup_fd_remove'

AFAIK my patches do not touch those files - maybe the current qemu build is broken?
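
The checkout above works because the patch series was generated against that exact commit. The general pattern - pin the tree to the commit a series targets before applying it - can be illustrated with a tiny self-contained repository (repo contents are illustrative):

```shell
# Build a throwaway repo with a "base" commit and a later "drift" commit,
# then pin back to base, as one would with 6801038bc52d... above.
git init -q demo-repo && cd demo-repo
git config user.email you@example.com
git config user.name you
echo v1 > Makefile && git add Makefile && git commit -qm base
base=$(git rev-parse HEAD)            # analogue of the commit id in the thread
echo v2 > Makefile && git commit -qam drift
git checkout -q "$base"               # detached HEAD at the pinned commit
grep v1 Makefile                      # tree content matches the base commit
```

A series that applies cleanly at the pinned commit may still fail hunks on a newer tree, which is exactly what the Makefile.rej output elsewhere in this thread shows.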


Re: [pve-devel] backup RFC preview

2012-11-19 Thread Alexandre DERUMIER
Maybe I should expose my internal qemu-git tree? 

Maybe we can add a new qemu 1.3 git repo in the proxmox git?

That way we could help to port the current pve patches to qemu 1.3.




- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 19 November 2012 13:37:54
Subject: RE: [pve-devel] backup RFC preview

My patches are against commit 6801038bc52d61f81ac8a25fbe392f1bad982887 

Try to use that commit. 

Maybe I should expose my internal qemu-git tree? 

 -Original Message- 
 From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
 Sent: Montag, 19. November 2012 13:33 
 To: Dietmar Maurer 
 Cc: pve-devel@pve.proxmox.com 
 Subject: Re: [pve-devel] backup RFC preview 
 
 Just sent an updated version - does that work better? 
 no :/ 
 
 Last qemu git: 
 
 I got Hunk Failed on Makefile 
 
 root@kvmtest1:~/qemu2/qemu# patch -p1 < patch1.patch
 patching file docs/backup-rfc.txt
 root@kvmtest1:~/qemu2/qemu# patch -p1 < patch2.patch
 patching file Makefile.objs
 Hunk #1 succeeded at 48 (offset 4 lines).
 patching file backup.c
 patching file block.c
 patching file block.h
 patching file block_int.h
 root@kvmtest1:~/qemu2/qemu# patch -p1 < patch3.patch
 patching file Makefile
 Hunk #2 FAILED at 186.
 1 out of 2 hunks FAILED -- saving rejects to file Makefile.rej
 patching file docs/specs/vma_spec.txt
 patching file vma-reader.c
 patching file vma-writer.c
 patching file vma.c
 patching file vma.h
 root@kvmtest1:~/qemu2/qemu# patch -p1 < patch4.patch
 patching file Makefile
 Hunk #1 FAILED at 186.
 1 out of 1 hunk FAILED -- saving rejects to file Makefile.rej
 patching file Makefile.objs
 Hunk #1 succeeded at 48 (offset 4 lines).
 patching file blockdev.c
 patching file hmp-commands.hx
 patching file hmp.c
 patching file hmp.h
 patching file monitor.c
 patching file qapi-schema.json
 patching file qmp-commands.hx
 root@kvmtest1:~/qemu2/qemu# patch -p1 < patch5.patch
 patching file include/qemu/ratelimit.h
 patching file tests/Makefile
 patching file tests/backup-test.c
 
 
 
 After manually adding to the Makefile
 
 
 +vma$(EXESUF): vma.o vma-reader.o $(tools-obj-y) $(block-obj-y) 
 
 I get the same error:
 
 
 #./configure --target-list=x86_64-softmmu --prefix=/usr --datadir=/usr/share 
 --docdir=/usr/share/doc/pve-qemu-kvm --sysconfdir=/etc --disable-xen 
 
 # make 
 CC async.o 
 CC nbd.o 
 CC block.o 
 CC blockjob.o 
 CC aes.o 
 CC qemu-config.o 
 CC thread-pool.o 
 CC qemu-progress.o 
 CC uri.o 
 CC notify.o 
 CC vma-writer.o 
 CC backup.o 
 CC qemu-coroutine.o 
 CC qemu-coroutine-lock.o 
 CC qemu-coroutine-io.o 
 CC qemu-coroutine-sleep.o 
 CC coroutine-ucontext.o 
 CC event_notifier-posix.o 
 CC aio-posix.o 
 CC block/raw.o 
 CC block/cow.o 
 CC block/qcow.o 
 CC block/vdi.o 
 CC block/vmdk.o 
 CC block/cloop.o 
 CC block/dmg.o 
 CC block/bochs.o 
 CC block/vpc.o 
 CC block/vvfat.o 
 CC block/qcow2.o 
 CC block/qcow2-refcount.o 
 CC block/qcow2-cluster.o 
 CC block/qcow2-snapshot.o 
 CC block/qcow2-cache.o 
 CC block/qed.o 
 CC block/qed-gencb.o 
 CC block/qed-l2-cache.o 
 CC block/qed-table.o 
 CC block/qed-cluster.o 
 CC block/qed-check.o 
 CC block/parallels.o 
 CC block/blkdebug.o 
 CC block/blkverify.o 
 CC block/raw-posix.o 
 CC block/linux-aio.o 
 CC block/nbd.o 
 CC block/sheepdog.o 
 CC block/iscsi.o 
 CC block/curl.o 
 CC block/rbd.o 
 LINK qemu-nbd 
 GEN qemu-img-cmds.h 
 CC qemu-img.o 
 LINK qemu-img 
 CC qemu-io.o 
 CC cmd.o 
 LINK qemu-io 
 CC fsdev/virtfs-proxy-helper.o 
 CC fsdev/virtio-9p-marshal.o 
 LINK fsdev/virtfs-proxy-helper 
 CC vma.o 
 CC vma-reader.o 
 LINK vma 
 osdep.o: In function `qemu_close':
 /root/qemu2/qemu/osdep.c:212: undefined reference to `monitor_fdset_dup_fd_find'
 /root/qemu2/qemu/osdep.c:218: undefined reference to `monitor_fdset_dup_fd_remove'
 osdep.o: In function `qemu_open':
 /root/qemu2/qemu/osdep.c:166: undefined reference to `monitor_fdset_get_fd'
 /root/qemu2/qemu/osdep.c:176: undefined reference to `monitor_fdset_dup_fd_add'
 qemu-sockets.o: In function `socket_connect':
 /root/qemu2/qemu/qemu-sockets.c:906: undefined reference to `monitor_get_fd'
 qemu-sockets.o: In function `socket_listen':
 /root/qemu2/qemu/qemu-sockets.c:937: undefined reference to `monitor_get_fd'
 collect2: ld returned 1 exit status
 make: *** [vma] Error 1
 
 
 
 
 note:
 patch3 adds:
 +vma$(EXESUF): vma.o vma-writer.o vma-reader.o $(tools-obj-y) $(block-obj-y)

 and patch4:
 -vma$(EXESUF): vma.o vma-writer.o vma-reader.o $(tools-obj-y) $(block-obj-y)
 +vma$(EXESUF): vma.o vma-reader.o $(tools-obj-y) $(block-obj-y)
 
 
 
 - Original Message -

 From: Dietmar Maurer diet...@proxmox.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Cc: pve-devel@pve.proxmox.com
 Sent: Monday, 19 November 2012 12:30:23
 Subject: RE: [pve-devel] backup RFC preview
 
 Just sent an updated version - does that work better? 
 
  -Original Message- 
  From
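
The "Hunk #N FAILED ... saving rejects" lines quoted above appear whenever a hunk's context no longer matches the target file - exactly what happens when a diff made against one qemu commit is applied to a newer tree. A self-contained sketch (file names are illustrative):

```shell
# Take a diff of a small change, then let the target file drift so the
# hunk's context no longer matches; patch then fails and writes a .rej file.
printf 'a\nb\nc\n' > target
cp target target.base
sed 's/^b$/b patched/' target.base > target.new
diff -u target target.new > demo.patch || true   # diff exits 1 when files differ
sed 's/^b$/b drifted/' target.base > target      # the tree has moved on
patch target < demo.patch || true                # "1 out of 1 hunk FAILED"
ls target.rej                                    # the rejected hunk for inspection
```

The .rej file holds the failed hunk, which is why the thread's next step was to re-add the vma rule to the Makefile by hand.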

Re: [pve-devel] backup RFC preview

2012-11-19 Thread Alexandre DERUMIER
I can't get network working in qemu  (virtio,e1000)...

and I can't get vnc working. (I made a change in pve-auth.patch to get it to
apply correctly).

Any idea ?




- Original Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Monday, 19 November 2012 15:08:48
Subject: Re: [pve-devel] backup RFC preview

Mhm with -monitor the qm monitor shell stuff doesn't work I always get the 
failure that qmp human interface cannot be found. 

I can get it to work in the pve gui by renaming qemu-system-x86_64 to kvm

I have also apply patch: pve-auth.patch 



- Original Message -

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Sent: Sunday, 18 November 2012 07:52:16
Subject: Re: [pve-devel] backup RFC preview

On 18.11.2012 at 07:31, Dietmar Maurer diet...@proxmox.com wrote:

 # ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...
 
 you meant -qmp not monitor? 
 
 no 

Mhm with -monitor the qm monitor shell stuff doesn't work I always get the 
failure that qmp human interface cannot be found. 

Stefan 

 


Re: [pve-devel] backup RFC preview

2012-11-18 Thread Dietmar Maurer
  # ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...
 
  you meant -qmp not monitor?
 
  no
 
 Mhm with -monitor the qm monitor shell stuff doesn't work I always get the
 failure that qmp human interface cannot be found.

Sure, this does not work with 'qm'. The idea is to use that for testing only,
and it is simply convenient to read monitor commands from 'stdin' (you can
also see debugging output there).
 



Re: [pve-devel] backup RFC preview

2012-11-18 Thread Stefan Priebe

On 18.11.2012 10:28, Dietmar Maurer wrote:

# ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...

you meant -qmp not monitor?


no


Mhm with -monitor the qm monitor shell stuff doesn't work I always get the
failure that qmp human interface cannot be found.


Sure, this does not work with 'qm'. The idea is to use that for testing only,
and it is simply convenient to read monitor commands from 'stdin' (you can
also see debugging output there).


mhm OK, i wanted to try your patches on a half-prod. system and i need
the monitor to work properly. But i wasn't able to get it running stably
with qemu.git.


Stefan


Re: [pve-devel] backup RFC preview

2012-11-17 Thread Stefan Priebe - Profihost AG
On 18.11.2012 at 07:31, Dietmar Maurer diet...@proxmox.com wrote:

 # ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...
 
 you meant -qmp not monitor?
 
 no

Mhm, with -monitor the qm monitor shell stuff doesn't work; I always get a
failure that the qmp human interface cannot be found.

Stefan

 


Re: [pve-devel] backup RFC preview

2012-11-14 Thread Alexandre DERUMIER

  How can i test? Command line qm command? 

# git clone git://git.qemu-project.org/qemu.git 

then apply the patches. 

# cd qemu 
# ./configure --target-list=x86_64-softmmu 
# make 

You can then start qemu with something like that: 

 # ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...


Are you going to use qemu instead of qemu-kvm for the 1.3 release? (I have read
that kvm will finally be officially merged into qemu for 1.3)




- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: pve-devel@pve.proxmox.com
Sent: Wednesday, 14 November 2012 07:18:36
Subject: Re: [pve-devel] backup RFC preview

  How can i test? Command line qm command? 

# git clone git://git.qemu-project.org/qemu.git 

then apply the patches. 

# cd qemu 
# ./configure --target-list=x86_64-softmmu 
# make 

You can then start qemu with something like that: 

# ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -enable-kvm -hda image.raw ...








Re: [pve-devel] backup RFC preview

2012-11-14 Thread Dietmar Maurer
 Are you going to use qemu instead of qemu-kvm for the 1.3 release? (I have read
 that kvm will finally be officially merged into qemu for 1.3)

Only if they announce that qemu-kvm is obsolete.

I currently use qemu for the backup tests because it is required 
to send patches against qemu.


Re: [pve-devel] backup RFC preview

2012-11-14 Thread Alexandre DERUMIER
I currently use qemu for the backup tests because it is required 
to send patches against qemu. 

Oh, you want to push patches upstream ? Great !


- Mail original - 

De: Dietmar Maurer diet...@proxmox.com 
À: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com, Stefan Priebe - Profihost AG 
s.pri...@profihost.ag 
Envoyé: Mercredi 14 Novembre 2012 16:41:29 
Objet: RE: [pve-devel] backup RFC preview 

 Are you going to use qemu instead of qemu-kvm for the 1.3 release? (I have read
 that kvm will finally be officially merged into qemu for 1.3)

Only if they announce that qemu-kvm is obsolete. 

I currently use qemu for the backup tests because it is required 
to send patches against qemu. 


Re: [pve-devel] backup RFC preview

2012-11-14 Thread Dietmar Maurer
 I currently use qemu for the backup tests because it is required to
 send patches against qemu.
 
 Oh, you want to push patches upstream ? Great !

Yes, that is the plan. It would be great if we can get at least the
basic framework upstream (PATCH 2/4). 

Else we should think twice if we can maintain that ourselves.

Usually it is a pain to get such things upstream, and that is
why I posted it here first to get your opinions.

So please tell me:

- if anything is unclear
- should I add further docs
- any ideas for cleanups
- anything else which helps to get it upstream





Re: [pve-devel] backup RFC preview

2012-11-14 Thread Alexandre DERUMIER
Ok,
I'm not sure I have the skill to understand all the code,

but I'll begin tests today and tomorrow, and  I'll try to make a report next 
week.


- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com, Stefan Priebe - Profihost AG s.pri...@profihost.ag
Sent: Wednesday, 14 November 2012 18:31:30
Subject: RE: [pve-devel] backup RFC preview

 I currently use qemu for the backup tests because it is required to 
 send patches against qemu. 
 
 Oh, you want to push patches upstream ? Great ! 

Yes, that is the plan. It would be great if we can get at least the 
basic framework upstream (PATCH 2/4). 

Else we should think twice if we can maintain that ourselves. 

Usually it is a pain to get such things upstream, and that is 
why I posted it here first to get your opinions. 

So please tell me: 

- if anything is unclear 
- should I add further docs 
- any ideas for cleanups 
- anything else which helps to get it upstream


[pve-devel] backup RFC preview

2012-11-13 Thread Dietmar Maurer
This is a preview of the planned backup feature.

The following patches are against latest qemu.git

I plan to send the patch to the qemu-devel list next week.

Before I do that, I would like to get some feedback about:

* functionality
* implementation style
* possible improvements

Feel free to ask questions ;-)

- Dietmar


Re: [pve-devel] backup RFC preview

2012-11-13 Thread Alexandre DERUMIER
Thanks Dietmar,

I'll try to give some feedback tomorrow!


- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: pve-devel@pve.proxmox.com
Sent: Tuesday, 13 November 2012 14:04:02
Subject: [pve-devel] backup RFC preview



This is a preview of the planned backup feature.

The following patches are against latest qemu.git 

I plan to send the patch to the qemu-devel list next week. 

Before I do that, I would like to get some feedback about: 

* functionality 
* implementation style 
* possible improvements 

Feel free to ask questions ;-) 

- Dietmar 


Re: [pve-devel] backup RFC preview

2012-11-13 Thread Stefan Priebe - Profihost AG

On 13.11.2012 14:04, Dietmar Maurer wrote:

This is a preview of the planned backup feature.

The following patches are against latest qemu.git

I plan to send the patch to the qemu-devel list next week.

Before I do that, I would like to get some feedback about:

* functionality

* implementation style

* possible improvements

Feel free to ask questions ;-)


So this is right now only for qcow / disk based backups? Or a general 
solution?


How can i test? Command line qm command?

Greets,
Stefan


Re: [pve-devel] backup RFC preview

2012-11-13 Thread Dietmar Maurer
 So this is right now only for qcow / disk based backups? Or a general
 solution?

I do not know what you mean by a 'general' solution. But it does not depend on
qcow2; instead it works with any storage type and file format.

 How can i test? Command line qm command?

Please use the qemu monitor command:

# backup filename




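
One note for testers: `backup filename` as shown above is an HMP (human monitor) command, which is why it is typed at the `-monitor stdio` prompt. When driving qemu over QMP instead, HMP commands can be tunnelled through QMP's `human-monitor-command`. A hedged sketch - the backup command's final name and arguments are whatever the RFC patches define, and the socket wiring (e.g. piping into a `-qmp unix:...` socket) is left out; this only builds and prints the JSON frames:

```shell
# Wrap an HMP command line in a QMP human-monitor-command frame.
hmp_over_qmp() {
    printf '{"execute":"human-monitor-command","arguments":{"command-line":"%s"}}\n' "$1"
}
# A QMP session first negotiates capabilities, then sends commands:
printf '{"execute":"qmp_capabilities"}\n'
hmp_over_qmp 'backup /tmp/vzdump-test.vma'
```

This is handy for scripting the same tests that the thread runs interactively at the monitor prompt.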