Re: [pve-devel] Qemu-img thin provision

2015-09-25 Thread Gilberto Nunes
Somebody? 2015-09-23 20:44 GMT-03:00 Gilberto Nunes : > Hi > > Can you, guys, adjust qemu to do thin provisioning when the VM file > is larger than 500 GB or 1 TB? > The reason I ask is that it is very hard to wait for qemu-img to finish > creating a huge VM file,

Re: [pve-devel] Qemu-img thin provision

2015-09-25 Thread Gilberto Nunes
Please Dietmar... I think it would be good, because right now, when I try to create a VM with a small 32 GB disk (the default) over NFS, it takes so long that I hit a timeout... The NFS share is already mounted with the soft option in storage.cfg, but with no effect... Thanks 2015-09-25 12:45 GMT-03:00 Dietmar

[pve-devel] Add 'action domain' lock and use it for the ha-manager

2015-09-25 Thread Thomas Lamprecht
An 'action domain' lock guarantees that, across all calls using a domain name, the passed code is executed atomically, regardless of whether the callers have a common file to read/write to. This can be used in the ha-manager, where such behaviour is needed to avoid parallel changes to different configs and command

[pve-devel] [PATCH pve-cluster 1/2] add function to lock a domain

2015-09-25 Thread Thomas Lamprecht
This can be used to execute code on an 'action domain' basis. E.g.: if there are actions that cannot run simultaneously, even if they don't access a common file and may be spread across different packages, we can now secure the consistency of said actions on an 'action domain'
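A minimal sketch of what such a domain lock could look like, assuming a plain flock-based helper; the sub name lock_domain, the lock directory and the calling convention are illustrative and not the actual pve-cluster API:

    use strict;
    use warnings;
    use Fcntl qw(LOCK_EX);

    # Hypothetical lock directory; the real implementation may well use
    # cluster filesystem paths instead.
    my $lockdir = "/var/lock/pve-domains";

    sub lock_domain {
        my ($domain, $code, @params) = @_;
        mkdir $lockdir if !-d $lockdir;
        my $path = "$lockdir/$domain.lck";
        open(my $fh, '>>', $path) or die "cannot open lock file '$path' - $!\n";
        # All callers using the same domain name serialize here, even if
        # they never touch a common config file.
        flock($fh, LOCK_EX) or die "cannot lock domain '$domain' - $!\n";
        my $res = eval { $code->(@params) };
        my $err = $@;
        close($fh);
        die $err if $err;
        return $res;
    }

A caller would then wrap its critical section, e.g. lock_domain('ha', sub { ... update the HA group and resource configs ... });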

[pve-devel] [PATCH pve-ha-manager 2/2] Use new lock domain sub instead of storage lock

2015-09-25 Thread Thomas Lamprecht
Doesn't change behaviour at all, but makes the code clearer Signed-off-by: Thomas Lamprecht --- src/PVE/API2/HA/Groups.pm| 6 +++--- src/PVE/API2/HA/Resources.pm | 6 +++--- src/PVE/HA/Config.pm | 9 - 3 files changed, 10 insertions(+), 11 deletions(-)

Re: [pve-devel] Qemu-img thin provision

2015-09-25 Thread Dietmar Maurer
would be OK for me... > On September 25, 2015 at 5:27 PM Gilberto Nunes > wrote: > > > Somebody? > > 2015-09-23 20:44 GMT-03:00 Gilberto Nunes : > > > Hi > > > > Can you, guys, adjust qemu to do thin provisioning when the VM file

Re: [pve-devel] [PATCH v2 pve-container 0/3] snapshot backup fixes v2

2015-09-25 Thread Dietmar Maurer
applied, thanks!

[pve-devel] Backup scheduler: Perl error

2015-09-25 Thread Michael Rasmussen
Hi all, Every time the backup scheduler runs I see this in the log for every VM that is backed up: Use of uninitialized value $cmd[8] in exec at /usr/share/perl/5.14/IPC/Open3.pm line 186. proxmox-ve-2.6.32: 3.4-163 (running kernel: 3.10.0-11-pve) pve-manager: 3.4-11 (running version:

Re: [pve-devel] Qemu-img thin provision

2015-09-25 Thread Gilberto Nunes
UPDATE: Today, what I do is call the qemu-img create command, create the image file without the preallocation flag, and just after that attach the resulting image to the VM using qm set... It's a lot of work... 2015-09-25 12:51 GMT-03:00 Gilberto Nunes : > Please Dietmar... I
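For illustration, the manual workaround described above could be scripted roughly like this; the VM id, storage name and disk size are made-up examples, and qcow2 images are sparse by default:

    use strict;
    use warnings;

    my $vmid = 100;   # hypothetical VM id on a hypothetical NFS storage
    my $path = "/mnt/pve/nfs-store/images/$vmid/vm-$vmid-disk-1.qcow2";

    # Create the image without any preallocation option, so only metadata
    # is written and the call returns quickly even for large sizes.
    system('qemu-img', 'create', '-f', 'qcow2', $path, '500G') == 0
        or die "qemu-img create failed\n";

    # Attach the resulting image to the VM as its first virtio disk.
    system('qm', 'set', $vmid, '-virtio0', "nfs-store:$vmid/vm-$vmid-disk-1.qcow2") == 0
        or die "qm set failed\n";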

[pve-devel] lxc live migration is currently not implemented, TASK ERROR: migration aborted

2015-09-25 Thread moula BADJI
Good evening, Is it possible that it will be implemented in the stable 4.0 final? Thanks. Moula

Re: [pve-devel] lxc live migration is currently not implemented, TASK ERROR: migration aborted

2015-09-25 Thread Dietmar Maurer
> Is it possible that it will be implemented in the stable 4.0 final? We will try to implement that asap, maybe for 4.1

Re: [pve-devel] Following tests on PvE 4.0 beta 2

2015-09-25 Thread Moula BADJI
It works when I use the GUI but never from the command line!!! > Date: Thu, 24 Sep 2015 17:56:03 +0100 > From: moul...@hotmail.com > To: pve-devel@pve.proxmox.com > Subject: Re: [pve-devel] Following tests on PvE 4.0 beta 2 > > I try to create a ceph cluster and I get the same error messages: > > #

Re: [pve-devel] Following tests on PvE 4.0 beta 2

2015-09-25 Thread Wolfgang Bumiller
Ah, you're missing a / in front of `dev/sdd`. The error message is misleading, and I see the wiki is also missing the slash there. Does it work if you use `-journal_dev /dev/sdd` with the leading "/" included? > On September 25, 2015 at 9:11 AM Moula BADJI wrote: > > > It

Re: [pve-devel] Following tests on PvE 4.0 beta 2

2015-09-25 Thread Dietmar Maurer
Maybe you should use '/dev/sdd' instead of 'dev/sdd'? > On September 25, 2015 at 9:11 AM Moula BADJI wrote: > > > It works when I use the GUI but never from the command line!!! > > > Date: Thu, 24 Sep 2015 17:56:03 +0100 > > From: moul...@hotmail.com > > To:

Re: [pve-devel] cloudinit : is it still planned for proxmox 4 ?

2015-09-25 Thread Dietmar Maurer
> don't know if ubuntu will release a 4.2.1 kernel soon and if it fixes that bug. It's already updated - I am just recompiling the kernel. https://git.proxmox.com/?p=pve-kernel.git;a=commitdiff;h=695da5a3f06c060a44ca6ee9d92261c4ef951c37

Re: [pve-devel] Following tests on PvE 4.0 beta 2

2015-09-25 Thread Moula BADJI
It works. The wiki was also modified by Wolfgang. Thanks. > Date: Fri, 25 Sep 2015 09:45:28 +0200 > From: diet...@proxmox.com > To: moul...@hotmail.com; pve-devel@pve.proxmox.com > Subject: Re: [pve-devel] Following tests on PvE 4.0 beta 2 > > Maybe you should use '/dev/sdd' instead of

[pve-devel] [PATCH pve-manager] CephTools: improve abs_path error handling

2015-09-25 Thread Wolfgang Bumiller
verify_blockdev_path didn't check the result of abs_path, causing commands like `pveceph createosd bad/path` to fail with a meaningless "Use of uninitialized value" message. --- PVE/CephTools.pm | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/PVE/CephTools.pm
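A hedged sketch of the kind of check the patch adds; the sub name and error messages are illustrative, not the exact CephTools code:

    use strict;
    use warnings;
    use Cwd 'abs_path';

    sub verify_blockdev_path_sketch {
        my ($rel_path) = @_;
        my $path = abs_path($rel_path);
        # abs_path returns undef for unresolvable paths such as 'bad/path',
        # so fail with a clear message instead of using the value blindly.
        die "unable to resolve path '$rel_path'\n" if !defined($path);
        die "'$path' is not a block device\n" if ! -b $path;
        return $path;
    }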

Re: [pve-devel] adding live migration options ? (xbzlre, compression, ...)

2015-09-25 Thread Alexandre DERUMIER
I have tested xbzrle & compression. xbzrle seems to be pretty fine; I don't see any bugs with it (tested with a video player running in the guest). There is also no overhead on a 10GbE network. Without xbzrle: Sep 25 12:52:18 migration speed: 1092.27 MB/s - downtime 66 ms. With xbzrle: Sep 25 13:30:17

[pve-devel] [PATCH v2 pve-container 1/3] vzdump:lxc: activate the right volumes

2015-09-25 Thread Wolfgang Bumiller
--- src/PVE/VZDump/LXC.pm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm index 858db8f..a7fafe9 100644 --- a/src/PVE/VZDump/LXC.pm +++ b/src/PVE/VZDump/LXC.pm @@ -105,7 +105,6 @@ sub prepare { $task->{hostname} =

[pve-devel] [PATCH v2 pve-container 0/3] snapshot backup fixes v2

2015-09-25 Thread Wolfgang Bumiller
Changes: Rather than a generic mount option parameter for LXC::mountpoint_mount, we now simply always mount snapshots with noload. Wolfgang Bumiller (3): vzdump:lxc: activate the right volumes vzdump:lxc: sync and skip journal in snapshot mode mount snapshots with the noload option

[pve-devel] [PATCH v2 pve-container 3/3] mount snapshots with the noload option

2015-09-25 Thread Wolfgang Bumiller
When using block-device-based snapshots we cannot mount the filesystem, as it is not clean, and we also can't replay the journal without write access (even `-o ro` writes to the device when replaying a journal; see the Linux docs under Documentation/filesystems/ext4.txt, section 3, option 'ro'). So
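As a rough illustration of the mount call this implies (the device and mountpoint below are hypothetical placeholders, and the snapshot is assumed to hold an ext3/ext4 filesystem):

    use strict;
    use warnings;

    my $dev = '/dev/vg0/snap_vm-100-disk-1';   # hypothetical snapshot device
    my $mnt = '/mnt/vzsnap0';                  # hypothetical mountpoint

    # 'ro' keeps the mount read-only and 'noload' skips journal replay
    # entirely, so the unclean snapshot is mounted without any write to
    # the underlying device.
    system('mount', '-o', 'ro,noload', $dev, $mnt) == 0
        or die "mount failed: $?\n";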

[pve-devel] [PATCH v2 pve-container 2/3] vzdump:lxc: sync and skip journal in snapshot mode

2015-09-25 Thread Wolfgang Bumiller
We now perform a 'sync' after 'lxc-freeze' and before creating the snapshot, since we now mount snapshots with '-o noload', which skips the journal entirely. --- src/PVE/LXC.pm | 7 +++ 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm index
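The resulting ordering, sketched with a placeholder container id (the real code lives in src/PVE/LXC.pm and the storage plugins):

    use strict;
    use warnings;

    my $ct = '100';   # hypothetical container id

    # Freeze the container so no further writes happen ...
    system('lxc-freeze', '-n', $ct) == 0 or die "freeze failed\n";
    # ... flush dirty data to the block device, because the snapshot will
    # later be mounted with '-o noload' and the journal is never replayed ...
    system('sync') == 0 or die "sync failed\n";
    # ... take the block-device snapshot here (storage specific) ...
    system('lxc-unfreeze', '-n', $ct) == 0 or die "unfreeze failed\n";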

[pve-devel] [PATCH 2/2] migration: disable compress

2015-09-25 Thread Alexandre Derumier
It's already disabled by default, but we want to be sure in case it changes in a later release. Signed-off-by: Alexandre Derumier --- PVE/QemuServer.pm | 1 + 1 file changed, 1 insertion(+) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 7f72daa..4906f2c 100644 ---

[pve-devel] [PATCH 1/2] enable xbzrle

2015-09-25 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier --- PVE/QemuServer.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 5c34c3a..7f72daa 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -3854,7 +3854,7 @@ sub

[pve-devel] [PATCH pve-container 3/3] vzdump:lxc: sync and skip journal in snapshot mode

2015-09-25 Thread Wolfgang Bumiller
When using block-device-based snapshots we cannot mount the filesystem, as it is not clean, and we also can't replay the journal without write access (even `-o ro` writes to the device when replaying a journal; see the Linux docs under Documentation/filesystems/ext4.txt, section 3, option 'ro'). So

[pve-devel] [PATCH pve-container 2/3] vzdump:lxc: activate the right volumes

2015-09-25 Thread Wolfgang Bumiller
--- src/PVE/VZDump/LXC.pm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm index 858db8f..a7fafe9 100644 --- a/src/PVE/VZDump/LXC.pm +++ b/src/PVE/VZDump/LXC.pm @@ -105,7 +105,6 @@ sub prepare { $task->{hostname} =

[pve-devel] [PATCH pve-container 1/3] mountpoint_mount: optional $extra_opts

2015-09-25 Thread Wolfgang Bumiller
--- src/PVE/LXC.pm | 18 +- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm index c01b401..c198eaf 100644 --- a/src/PVE/LXC.pm +++ b/src/PVE/LXC.pm @@ -2057,7 +2057,7 @@ my $check_mount_path = sub { # use $rootdir = undef to just

[pve-devel] [PATCH pve-storage] volume_snapshot_delete: deactivate before deleting

2015-09-25 Thread Wolfgang Bumiller
--- PVE/Storage.pm | 1 + 1 file changed, 1 insertion(+) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index c27e9cf..e4f434a 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -200,6 +200,7 @@ sub volume_snapshot_delete { if ($storeid) { my $scfg = storage_config($cfg, $storeid);

[pve-devel] [PATCH v2 pve-storage] volume_snapshot_delete: deactivate before deleting

2015-09-25 Thread Wolfgang Bumiller
--- Changed: Moved the deactivate call from Storage.pm to the plugins, as they are also the ones dealing with the $running parameter. PVE/Storage/Plugin.pm | 2 ++ PVE/Storage/RBDPlugin.pm | 2 ++ PVE/Storage/SheepdogPlugin.pm | 2 ++ PVE/Storage/ZFSPoolPlugin.pm | 1 + 4 files