Alexandre Derumier
Systems and storage engineer
Phone: 03 20 68 90 88
Fax: 03 20 68 90 81
45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris
MonSiteEstLent.com - Blog dedicated to web performance and handling traffic spikes
From: Cesar Peschiera
Hi Alexandre
Maybe the problem is in PVE, because:
A) When these 2 nodes have quorum (the light is green in the PVE GUI), the VM
configured for HA does not turn on.
B) Then I try to start the VM manually, and I get this error message:
Executing HA start for VM 109
Member pve5 trying to enable
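To narrow down whether this is a quorum problem or an rgmanager problem, it may help to check the cluster state on each node first. This is only a suggested sketch: `pvecm` and `clustat` are the usual tools on PVE 3.x, and the `pvevm:109` service name assumes rgmanager's naming of HA-managed VMs.

```shell
# Check membership/quorum from each node (PVE 3.x, cman-based):
pvecm status    # quorum information and expected votes
pvecm nodes     # both nodes should be listed as members

# For HA-managed VMs, rgmanager's view of the service:
clustat         # e.g. the state of service pvevm:109
```

If quorum is fine on both nodes but rgmanager still refuses to start the service, the problem is more likely in the HA stack than in the network.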
Can you post the /etc/network/interfaces of these 10Gb/s nodes?
This is my configuration:
Note: The LAN uses 192.100.100.0/24
#Network interfaces
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet
needed by my last pve-manager pending patches
I'm not sure, but maybe we could simply change the /config API to display
pending devices if they exist
___
pve-devel mailing list
pve-devel@pve.proxmox.com
get the current config with pending applied
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/API2/Qemu.pm | 88 ++
1 file changed, 88 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index fb36ce8..20c0438 100644
changelog:
-remove grid header
-fix icon alignment
-allow edit of pending devices
-adding new nic find next id including pending devices
fixme:
- add missing options from config file in store
- remove ObjectStore2.js
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
www/manager/Makefile |2 +
www/manager/data/ObjectStore2.js | 28
www/manager/data/PVEProxy.js |6 ++
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
www/manager/qemu/HardwareView.js |2 +-
www/manager/qemu/Options.js |2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/www/manager/qemu/HardwareView.js b/www/manager/qemu/HardwareView.js
index
Maybe you can try to put the 192.100.100.51 IP address directly on bond0,
to avoid corosync traffic going through vmbr0.
(I remember some old offloading bugs with 10GbE NICs and the Linux bridge.)
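As a rough sketch of that suggestion (interface names, the bonding mode, and which NICs carry guest traffic are assumptions, not taken from Cesar's full config), the relevant part of /etc/network/interfaces might look like:

```
# Give bond0 the cluster IP directly, so corosync does not cross vmbr0
auto bond0
iface bond0 inet static
    address 192.100.100.51
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100

# Keep the guest bridge on separate ports, without the cluster IP
auto vmbr0
iface vmbr0 inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

The point of the layout is that cluster traffic terminates on bond0 itself, so it never traverses the bridge code path where the old offloading bugs lived.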
----- Original Message -----
From: Cesar Peschiera br...@click.com.py
To: aderumier aderum...@odiso.com
Cc:
needed by my last pve-manager pending patches
I'm not sure, but maybe we could simply change the /config API to display
pending devices if they exist
Yes, I think that is a good idea.
Currently (with the pending patches), the semantics of the config API changed,
which is bad.
Instead, I thought about adding a new optional Boolean parameter called
'pending', used to include pending changes. That way the config API remains
stable.
Would you mind adapting your patch?
Sure, no problem.
I'm not sure how to pass the optional 'pending' param in the GET API URL?
GET /path/config?pending=1
Ok, thanks.
I'll be on Xmas holiday next week, I'll try to work on it after that.
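Assuming the parameter lands as proposed, the call could be exercised against the HTTP API like this (the node name, the VMID, and the authentication ticket are placeholders, not values confirmed in this thread):

```shell
# Hypothetical example: fetch VM 109's config including pending changes.
# $TICKET is a PVEAuthCookie obtained from /api2/json/access/ticket.
curl -s -k -b "PVEAuthCookie=$TICKET" \
  "https://pve5:8006/api2/json/nodes/pve5/qemu/109/config?pending=1"
```

Without pending=1 the endpoint would keep returning only the applied configuration, which is what keeps the API semantics stable.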
----- Original Message -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, December 19, 2014 14:27:17
Subject: Re:
Hi,
While running plain Debian wheezy as a guest, I've seen 95% of the guests
crash after migration (screen frozen or black).
I do not see this with Debian Jessie or Windows.
Does anybody know a workaround or solution? I have no control over the guests,
so I need a qemu- or host-based workaround.
On Fri, 19 Dec 2014 19:10:50 +0100
Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote:
Hi,
While running plain Debian wheezy as a guest, I've seen 95% of the guests
crash after migration (screen frozen or black).
I do not see this with Debian Jessie or Windows.
Does anybody know a workaround or solution?
On Fri, 19 Dec 2014 19:22:17 +0100
Michael Rasmussen m...@datanom.net wrote:
All my Debian servers are running wheezy and none of them has ever
crashed due to migration.
Adding to the above: I have extended the Unix philosophy to servers as well,
like this: on a server I only have one service
On 19.12.2014 19:22, Michael Rasmussen wrote:
[...]
Hello,
is there any reason to have kvmclock available for the guests?
There is -cpu ...,-kvmclock to remove the kvmclock feature. For example,
guests with 3.2 kernels use tsc like newer kernels if kvmclock is
disabled.
There also seem to be a lot of reports, when searching Google, of
is there any reason to have kvmclock available for the guests?
AFAIK that is the preferred clock with lowest overhead.
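One way to see which clock a guest actually ended up using is the kernel's clocksource interface in sysfs (standard Linux paths; run this inside the guest):

```shell
# Clocksources the guest kernel detected (kvm-clock appears here when exposed)
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# The one currently in use; with kvmclock disabled, a recent kernel
# typically falls back to tsc if the TSC is stable
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
```

Comparing this before and after disabling kvmclock would show directly whether the guest switched to tsc.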
On 19.12.2014 21:06, Dietmar Maurer wrote:
is there any reason to have kvmclock available for the guests?
AFAIK that is the preferred clock with lowest overhead.
tsc definitely has lower overhead on newer kernels. That's also why every
kernel since 3.8 prefers tsc. So it's right for CPUs which do
On Fri, 19 Dec 2014 20:02:29 +0100
Stefan Priebe s.pri...@profihost.ag wrote:
OK - nice to hear this. I'm still on qemu 2.1.2 - have you already upgraded
to 2.2.0?
pve-qemu-kvm: 2.1-10
Host is Dual E5-2695 v2 running 3.10.63 kernel.
I am on 2.6.32 kernel.