[pve-devel] [PATCH pve-ha-manager 1/2] Add service existence and state check helpers

2015-10-12 Thread Thomas Lamprecht
Add an exists helper which returns, in a service-specific way, whether the resource exists anywhere on the cluster, i.e. whether it can be added. Also add a check whether a service is HA managed. Signed-off-by: Thomas Lamprecht --- src/PVE/HA/Config.pm| 14 ++ src/PVE/HA/Resources.pm | 20
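(For illustration only: a minimal sketch of what such a "service is HA managed" check could look like. The helper name, the read_resources_config() call and the returned {ids} layout are assumptions based on the existing config helpers, not the actual patch code.)

    use PVE::HA::Config;

    # Sketch: a service counts as HA managed if it has an entry in the
    # HA resources configuration.
    sub service_is_ha_managed {
        my ($sid) = @_;
        my $conf = PVE::HA::Config::read_resources_config();
        return defined($conf->{ids}->{$sid}) ? 1 : 0;
    }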

[pve-devel] [PATCH pve-container] fix bug #752: correct size from mount point

2015-10-12 Thread Wolfgang Link
--- src/PVE/API2/LXC/Config.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm index cd1a9cd..d43489f 100644 --- a/src/PVE/API2/LXC/Config.pm +++ b/src/PVE/API2/LXC/Config.pm @@ -261,7 +261,7 @@

[pve-devel] [PATCH pve-ha-manager 2/2] check services better to avoid unknown behaviour

2015-10-12 Thread Thomas Lamprecht
This fixes:
-) addition of a nonexistent VM/CT
-) migration, relocation, deletion of a resource which was not HA managed
-) deletion of a nonexistent group
-) a typo (s/storage/resource/)
Signed-off-by: Thomas Lamprecht --- src/PVE/API2/HA/Groups.pm| 3 +++

Re: [pve-devel] [PATCH pve-container] fix bug #752: correct size from mount point

2015-10-12 Thread Dietmar Maurer
> - $mp->{size} = $newsize/1024; # kB
> + $mp->{size} = $newsize; # kB

makes no sense to me - why exactly?

Re: [pve-devel] [PATCH pve-ha-manager] fix race condition on lrm_status read/write

2015-10-12 Thread Dietmar Maurer
While I can see the bug, the fix looks incorrect to me. PVE::Tools::file_set_contents is atomic, so it never deletes the file. So maybe this is a bug inside pmxcfs. It would be great to have a test case which triggers that bug. On 10/12/2015 10:36 AM, Thomas Lamprecht wrote: Reading and
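(For context, a minimal sketch of the write-to-temp-then-rename pattern that makes such an update atomic: readers see either the old file or the complete new one, never a partially written or missing file. Illustrative only, not the actual PVE::Tools::file_set_contents code.)

    sub atomic_set_contents {
        my ($filename, $data) = @_;
        my $tmp = "$filename.tmp.$$";   # temporary file in the same directory
        open(my $fh, '>', $tmp) or die "open $tmp failed: $!\n";
        print {$fh} $data;
        close($fh) or die "close $tmp failed: $!\n";
        # rename() atomically replaces the old file with the new one
        rename($tmp, $filename) or die "rename $tmp failed: $!\n";
    }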

Re: [pve-devel] [PATCH v4 pve-container] Add new pct fsck command, to check the rootfs of a container for consistency

2015-10-12 Thread Wolfgang Bumiller
On Mon, Oct 12, 2015 at 10:51:29AM +0200, Emmanuel Kasper wrote:
> * the filesystem-specific command will be called automatically by fsck (at the moment ext4)
> * the -y flag ensures that the filesystem can be fixed automatically in a non-interactive session
> * the -f flag forces a

[pve-devel] add pct fsck command

2015-10-12 Thread Emmanuel Kasper
Changed:
* add option to check extra mount points, but skip bind mounts
* add -l option to fsck to lock the device during fsck, preventing multiple fsck instances from running on the same device (idea taken from the systemd fsck service)

[pve-devel] [PATCH v4 pve-container] Add new pct fsck command, to check the rootfs of a container for consistency

2015-10-12 Thread Emmanuel Kasper
* the filesystem-specific command will be called automatically by fsck (at the moment ext4)
* the -y flag ensures that the filesystem can be fixed automatically in a non-interactive session
* the -f flag forces a filesystem check even if the fs seems clean
--- src/PVE/CLI/pct.pm | 65
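(Rough sketch only of how such a check could be invoked from Perl. The device path is a placeholder and the code is not the actual pct.pm implementation; only the -f/-y flags come from the patch description above.)

    use PVE::Tools;

    my $device = '/dev/pve/vm-100-disk-1';   # placeholder volume path
    # -f: force a check even if the fs seems clean, -y: answer yes non-interactively
    PVE::Tools::run_command(['fsck', '-f', '-y', $device]);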

[pve-devel] [PATCH v4 pve-cluster] improve RRP support and use 'name' subkey as default

2015-10-12 Thread Thomas Lamprecht
This patch allows configuring RRP (= redundant ring protocol) at cluster creation time, and also setting the ring 0 and ring 1 addresses when adding a new node. This helps with, and fixes, some bugs when corosync runs completely separated on its own network. Changing rrp configs, or the bindnet addresses
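(Illustrative only: a corosync totem section with two rings; the rrp_mode value and the network addresses are placeholders, not taken from the patch.)

    totem {
      rrp_mode: passive
      interface {
        ringnumber: 0
        bindnetaddr: 192.168.0.0
      }
      interface {
        ringnumber: 1
        bindnetaddr: 10.10.10.0
      }
    }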

Re: [pve-devel] [PATCH v2 pve-ha-manager 1/5] Add 'service is ha managed' check

2015-10-12 Thread Dietmar Maurer
That code looks quite similar to vm_is_ha_managed(). Is it possible to improve code reuse? On 10/12/2015 03:04 PM, Thomas Lamprecht wrote: add a check for a given $sid if it's managed by the ha stack Signed-off-by: Thomas Lamprecht --- src/PVE/HA/Config.pm | 16

Re: [pve-devel] [PATCH v4 pve-container] Add new pct fsck command, to check the rootfs of a container for consistency

2015-10-12 Thread Emmanuel Kasper
> I don't think parsing and bind-mount check need to be part of the 'if' clause.
> Checking in case of the rootfs, too, doesn't harm, and you might want to be
> able to put a rootfs on a block device, too. (Though this currently doesn't
> seem to be possible.)
>
> Also, while you do want to skip

[pve-devel] [PATCH v2 pve-ha-manager 5/5] fix typo in error message s/storage/resource/

2015-10-12 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht --- src/PVE/API2/HA/Resources.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm index 93383aa..fc34433 100644 --- a/src/PVE/API2/HA/Resources.pm +++

[pve-devel] [PATCH v2 pve-ha-manager 2/5] Document parameters in parent class

2015-10-12 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht --- src/PVE/HA/Resources.pm | 10 ++ 1 file changed, 10 insertions(+) diff --git a/src/PVE/HA/Resources.pm b/src/PVE/HA/Resources.pm index c41fa91..c415c3c 100644 --- a/src/PVE/HA/Resources.pm +++ b/src/PVE/HA/Resources.pm @@

[pve-devel] [PATCH v2 pve-ha-manager 1/5] Add 'service is ha managed' check

2015-10-12 Thread Thomas Lamprecht
add a check for a given $sid if it's managed by the ha stack Signed-off-by: Thomas Lamprecht --- src/PVE/HA/Config.pm | 16 1 file changed, 16 insertions(+) diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm index 0a6dfa5..d6d974d 100644 ---

[pve-devel] [PATCH v2 pve-ha-manager 3/5] Add resource existence check helper

2015-10-12 Thread Thomas Lamprecht
Add a helper to the resource class which returns, in a service-specific way, whether the resource exists on the cluster, i.e. whether it can be added. Signed-off-by: Thomas Lamprecht --- src/PVE/HA/Resources.pm | 32 1 file changed, 32 insertions(+) diff --git
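(Again for illustration only: one way a VM/CT plugin could answer "does this VMID exist anywhere on the cluster". PVE::Cluster::get_vmlist() is an existing cluster-wide index, but the helper itself is a sketch, not the patch code.)

    use PVE::Cluster;

    sub resource_exists_on_cluster {
        my ($vmid) = @_;
        # get_vmlist() returns the cluster-wide VM/CT index: { ids => { vmid => {...} } }
        my $vmlist = PVE::Cluster::get_vmlist();
        return defined($vmlist->{ids}->{$vmid}) ? 1 : 0;
    }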

Re: [pve-devel] [PATCH] fix bug #750: deactivate volumes to be sure there are no volumes active on the source node

2015-10-12 Thread Dietmar Maurer
applied, thanks!

[pve-devel] [PATCH pve-libspice-server] add debian fixes for DSA-3371-1

2015-10-12 Thread Wolfgang Link
--- Makefile | 2 +- .../0001-worker-validate-correctly-surfaces.patch | 117 + ...d-double-free-or-double-create-of-surface.patch | 41 ...efine-a-constant-to-limit-data-from-guest.patch | 42

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Alexandre DERUMIER
I have tried to shut down the apparmor service, doesn't help. Setting aio=threads fixes the problem. - Original message - From: "aderumier" To: "pve-devel" Sent: Monday, 12 October 2015 18:53:48 Subject: [pve-devel] proxmox 4.0 : "Could not set AIO

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Alexandre DERUMIER
>> Wild guess - maybe it is related to INotify?
>>
>> # cat /proc/sys/fs/inotify/max_user_instances
>>
>> We already ran into that limit with LXC containers recently.
>> Please can you test?

Yes, this is working after increasing /proc/sys/fs/inotify/max_user_instances! Thanks. This is

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Dietmar Maurer
> On October 12, 2015 at 6:53 PM Alexandre DERUMIER wrote:
>
> Hi,
> I have upgraded a server with a lot of VMs (160 VMs),
> each VM is a clone of a template and there is 1 disk per VM.
> When I started the 130th VM, I got this error:
> kvm: -drive

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Alexandre DERUMIER
Sorry, I spoke too fast, it doesn't seem to resolve the problem. It has worked for 1 VM (don't know why), but the others don't start. (I have increased the counter to 1 to be sure.) I'll check that tomorrow - Original message - From: "aderumier" To: "dietmar"

[pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Alexandre DERUMIER
Hi, I have upgraded a server with a lot of VMs (160 VMs), each VM is a clone of a template and there is 1 disk per VM. When I started the 130th VM, I got this error: kvm: -drive

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Alexandre DERUMIER
maybe it is related:

proxmox4 : node1
root@kvmmind1:~# cat /proc/sys/fs/aio-nr
116736
root@kvmmind1:~# cat /proc/sys/fs/aio-max-nr
65536

proxmox4 : node2 (where I have the problem)
root@kvmmind2:~# cat /proc/sys/fs/aio-nr
131072
root@kvmmind2:~# cat /proc/sys/fs/aio-max-nr
65536

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Alexandre DERUMIER
Hi, increasing /proc/sys/fs/aio-max-nr fixes the problem. I see that libvirt increases it by default to 1 million vs 65000: http://libvirt.org/git/?p=libvirt.git;a=commit;h=5298551e07a9839c046e0987b325e03f8ba801e5 The commit says there is no penalty to increasing this value by default. I don't
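(For reference: the limit can be raised at runtime with "sysctl -w fs.aio-max-nr=1048576" and persisted with a matching fs.aio-max-nr entry in /etc/sysctl.conf. 1048576 is simply the value the libvirt commit above switches to; the value to use for Proxmox is not decided in this thread.)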

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Alexandre DERUMIER
>> I don't know yet why more aio requests are used (qemu version maybe?).

Testing qemu 2.4 backported to proxmox 3, I'm seeing 256 aio requests per disk; with qemu 2.4 on proxmox 4, I'm seeing 1024 aio requests per disk. - Original message - From: "aderumier" To: "pve-devel"
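(Back-of-the-envelope: at 1024 requests per disk, 64 running disks already exhaust the default limit of 65536, and about 128 disks need roughly 131072 slots, which matches the aio-nr value observed on the failing node above.)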

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Lindsay Mathieson
On 13 October 2015 at 11:14, Alexandre DERUMIER wrote:
> testing qemu 2.4 backported to proxmox 3, I'm seeing 256 aio requests per disk

Sorry to segue in, but how do you get qemu 2.4 for Proxmox 3? -- Lindsay

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Dietmar Maurer
> On October 13, 2015 at 6:44 AM Dietmar Maurer wrote:
>
> > On October 13, 2015 at 3:03 AM Alexandre DERUMIER wrote:
> >
> > Hi,
> >
> > increasing /proc/sys/fs/aio-max-nr fix the problem.
>
> but 128*128 = 16384, so why do you

Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

2015-10-12 Thread Dietmar Maurer
> On October 13, 2015 at 3:03 AM Alexandre DERUMIER wrote:
>
> Hi,
>
> increasing /proc/sys/fs/aio-max-nr fix the problem.

but 128*128 = 16384, so why do you reach the limit of 65536? Do you have more than one disk per VM?