Add an exists helper which returns, in a service-specific way, whether
the resource exists anywhere on the cluster, i.e. whether it can be added.
Also add a check whether a service is HA managed.
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Config.pm    | 14 ++
src/PVE/HA/Resources.pm | 20
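A minimal sketch of the HA-managed check, assuming a read_resources_config
helper that returns a hash with an 'ids' key (names and config layout are
illustrative, not the actual patch):

    use PVE::HA::Config;

    # returns 1 if $sid (e.g. 'vm:100') has an entry in the HA
    # resources configuration, 0 otherwise
    sub service_is_ha_managed {
        my ($sid) = @_;
        my $conf = PVE::HA::Config::read_resources_config();
        return defined($conf->{ids}->{$sid}) ? 1 : 0;
    }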
---
src/PVE/API2/LXC/Config.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
index cd1a9cd..d43489f 100644
--- a/src/PVE/API2/LXC/Config.pm
+++ b/src/PVE/API2/LXC/Config.pm
@@ -261,7 +261,7 @@
This fixes:
-) addition of a nonexistent VM/CT
-) migration, relocation, deletion of a resource which was not HA
managed
-) deletion of a nonexistent group
-) a typo (s/storage/resource/)
Signed-off-by: Thomas Lamprecht
---
src/PVE/API2/HA/Groups.pm | 3 +++
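As a hedged sketch, the group-deletion fix presumably boils down to a guard
of this shape (helper name and config layout are assumptions):

    use PVE::HA::Config;

    # refuse to delete a group that does not exist
    sub assert_group_exists {
        my ($group) = @_;
        my $cfg = PVE::HA::Config::read_group_config();
        die "ha group '$group' does not exist\n"
            if !defined($cfg->{ids}->{$group});
    }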
> - $mp->{size} = $newsize/1024; # kB
> + $mp->{size} = $newsize; # kB
makes no sense to me - why exactly?
While I can see the bug, the fix looks incorrect to me.
PVE::Tools::file_set_contents is atomic, so it never deletes the file.
So maybe this is a bug inside pmxcfs. It would be great to have a test case
which triggers that bug.
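For context, the atomic pattern file_set_contents relies on is
write-to-temp-then-rename; a simplified sketch (not the actual
PVE::Tools code):

    sub atomic_write {
        my ($filename, $data) = @_;
        my $tmp = "$filename.tmp.$$";
        open(my $fh, '>', $tmp) or die "open $tmp failed: $!\n";
        print $fh $data;
        close($fh) or die "close $tmp failed: $!\n";
        # rename() is atomic on POSIX filesystems, so readers see
        # either the old or the new content, never a partial file
        rename($tmp, $filename) or die "rename $tmp failed: $!\n";
    }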
On 10/12/2015 10:36 AM, Thomas Lamprecht wrote:
Reading and
On Mon, Oct 12, 2015 at 10:51:29AM +0200, Emmanuel Kasper wrote:
> * the filesystem specific command will be called automatically by fsck (at the
> moment ext4)
> * the -y flag ensures that the filesystem can be fixed automatically in
> a non-interactive session
> * the -f flag forces a filesystem check even if the fs seems clean
Changed:
* add option to check extra mount points, but skip bind mounts
* add -l option to fsck to lock the device during fsck, preventing
multiple fsck instances from running on the same device
(idea taken from the systemd fsck service)
* the filesystem specific command will be called automatically by fsck (at the
moment ext4)
* the -y flag ensures that the filesystem can be fixed automatically in
a non-interactive session
* the -f flag forces a filesystem check even if the fs seems clean
---
src/PVE/CLI/pct.pm | 65
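Putting the described flags together, the invocation would look roughly
like this (command construction is a sketch, not the actual pct code):

    use PVE::Tools;

    my $device = '/dev/mapper/vg-ct100';    # hypothetical volume
    # -f: force a check even if the fs seems clean
    # -y: answer yes to all repair questions (non-interactive)
    # -l: lock the device so no second fsck runs on it
    PVE::Tools::run_command(['fsck', '-f', '-y', '-l', $device]);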
This patch allows configuring RRP (= redundant ring protocol)
at cluster creation time, and also sets the ring 0 and ring 1 addresses
when adding a new node. This helps with, and fixes some bugs in, setups
where corosync runs completely separated on its own network.
Changing RRP configs, or the bindnet addresses
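For illustration, an RRP configuration in corosync.conf looks roughly
like this (addresses are placeholders):

    totem {
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0
        }
        interface {
            ringnumber: 1
            bindnetaddr: 10.10.10.0
        }
    }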
That code looks quite similar to vm_is_ha_managed(). Is it possible to
improve code reuse?
On 10/12/2015 03:04 PM, Thomas Lamprecht wrote:
add a check for a given $sid whether it is managed by the HA stack
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Config.pm | 16
> I don't think the parsing and bind-mount check need to be part of the 'if'
> clause. Checking in the rootfs case, too, doesn't harm, and you might
> want to be able to put a rootfs on a block device, too. (Though this
> currently doesn't seem to be possible.)
>
> Also, while you do want to skip
Signed-off-by: Thomas Lamprecht
---
src/PVE/API2/HA/Resources.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm
index 93383aa..fc34433 100644
--- a/src/PVE/API2/HA/Resources.pm
+++
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Resources.pm | 10 ++
1 file changed, 10 insertions(+)
diff --git a/src/PVE/HA/Resources.pm b/src/PVE/HA/Resources.pm
index c41fa91..c415c3c 100644
--- a/src/PVE/HA/Resources.pm
+++ b/src/PVE/HA/Resources.pm
@@
add a check for a given $sid whether it is managed by the HA stack
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Config.pm | 16
1 file changed, 16 insertions(+)
diff --git a/src/PVE/HA/Config.pm b/src/PVE/HA/Config.pm
index 0a6dfa5..d6d974d 100644
---
Add a helper to the resource class which returns, in a service-specific
way, whether the resource exists on the cluster, i.e. can be added.
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Resources.pm | 32
1 file changed, 32 insertions(+)
diff --git
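A rough sketch of what such a service-specific exists helper can look
like for the VM type, using the cluster-wide vmlist (method name and
placement are illustrative):

    use PVE::Cluster;

    # in the VM resource subclass: the service can be added as an
    # HA resource iff the vmid shows up anywhere in the cluster
    sub exists {
        my ($class, $vmid) = @_;
        my $vmlist = PVE::Cluster::get_vmlist();
        return defined($vmlist->{ids}->{$vmid}) ? 1 : 0;
    }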
applied, thanks!
---
Makefile | 2 +-
.../0001-worker-validate-correctly-surfaces.patch | 117 +
...d-double-free-or-double-create-of-surface.patch | 41
...efine-a-constant-to-limit-data-from-guest.patch | 42
I have tried to shut down the apparmor service, it doesn't help.
Setting aio=threads fixes the problem.
----- Original Mail -----
From: "aderumier"
To: "pve-devel"
Sent: Monday, October 12, 2015 18:53:48
Subject: [pve-devel] proxmox 4.0 : "Could not set AIO
>>Wild guess - maybe it is related to INotify?
>
>># cat /proc/sys/fs/inotify/max_user_instances
>>
>>We already ran into that limit with LXC containers recently.
>>Can you please test?
Yes, it works after increasing /proc/sys/fs/inotify/max_user_instances!
Thanks.
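For reference, the limit can be raised at runtime with a single sysctl
call (the value here is only an example):

    # sysctl -w fs.inotify.max_user_instances=512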
This is
> On October 12, 2015 at 6:53 PM Alexandre DERUMIER wrote:
>
>
> Hi,
>
> I have upgraded a server with a lot of VMs (160 VMs),
>
> each VM is a clone of a template and there is 1 disk per VM
>
> When I started the 130th VM, I got this error
>
> kvm: -drive
>
Sorry,
I spoke too fast, it doesn't seem to resolve the problem.
It worked for 1 VM (don't know why), but the others don't start.
(I have increased the counter to 1 to be sure)
I'll check that tomorrow
----- Original Mail -----
From: "aderumier"
To: "dietmar"
Hi,
I have upgraded a server with a lot of VMs (160 VMs),
each VM is a clone of a template and there is 1 disk per VM
When I started the 130th VM, I got this error
kvm: -drive
maybe it is related:
proxmox4 : node1
root@kvmmind1:~# cat /proc/sys/fs/aio-nr
116736
root@kvmmind1:~# cat /proc/sys/fs/aio-max-nr
65536
proxmox4 : node2 (where I have the problem)
root@kvmmind2:~# cat /proc/sys/fs/aio-nr
131072
root@kvmmind2:~# cat /proc/sys/fs/aio-max-nr
65536
Hi,
increasing /proc/sys/fs/aio-max-nr fixes the problem.
I see that libvirt increases it by default to 1 million vs 65000
http://libvirt.org/git/?p=libvirt.git;a=commit;h=5298551e07a9839c046e0987b325e03f8ba801e5
the commit says that there is no penalty to increasing this value by default.
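Following libvirt's lead, the bump would be (value taken from the
libvirt commit above):

    # sysctl -w fs.aio-max-nr=1048576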
I don't
>>I don't know yet why more aio requests are used (qemu version maybe?).
testing qemu 2.4 backported to proxmox 3, I'm seeing 256 aio requests per disk
with qemu 2.4 on proxmox 4, I'm seeing 1024 aio requests per disk.
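If those numbers hold, they explain the failure: at 1024 requests per
disk, 64 single-disk VMs already hit the default limit (64 x 1024 =
65536), while the old 256 per disk would have allowed 256 such VMs.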
----- Original Mail -----
From: "aderumier"
To: "pve-devel"
On 13 October 2015 at 11:14, Alexandre DERUMIER wrote:
> testing qemu 2.4 backported to proxmox 3, I'm seeing 256 aio requests per
> disk
Sorry to segue in, but how do you get qemu 2.4 for Proxmox 3?
--
Lindsay
> On October 13, 2015 at 6:44 AM Dietmar Maurer wrote:
>
> > On October 13, 2015 at 3:03 AM Alexandre DERUMIER
> > wrote:
> >
> > Hi,
> >
> > increasing /proc/sys/fs/aio-max-nr fixes the problem.
>
> but 128*128 = 16384, so why do you reach the limit 65536? You have
> more than one disk per VM?
> On October 13, 2015 at 3:03 AM Alexandre DERUMIER wrote:
>
> Hi,
>
> increasing /proc/sys/fs/aio-max-nr fixes the problem.
but 128*128 = 16384, so why do you reach the limit 65536? You have
more than one disk per VM?
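With the 1024-requests-per-disk figure measured later in the thread, the
arithmetic works out: 128 disks x 1024 = 131072, which is exactly the
aio-nr observed on node2 and twice the 65536 default limit.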