>>Unless all cluster nodes have identical hardware, how do you determine if a
>>given node is a suitable target for a vm?
I think we could add a manual "host cpu weight" option, because it's difficult
to compare CPU performance (frequencies / number of cores / Intel vs. AMD).
>>Also, should there be a
> On 17.11.2015 at 08:23, Alexandre DERUMIER wrote:
>
> Hi,
>
> For next year,
> I would like to implement a new feature: workload scheduler.
>
> This would allow automatic migration of VMs, to try to balance the VMs across
> the cluster,
> based on defined rules (for example,
Hi,
the "bridge vlan show" truncated-message error with too many VLANs
has been fixed in kernel 4.4:
http://marc.info/?l=linux-netdev&m=144488823001097&w=2
http://marc.info/?l=linux-netdev&m=144489933906491&w=2
When will we get them in a fixed kernel? (I don't know if a backport to 4.2 is
planned.)
> On November 17, 2015 at 4:37 PM Dietmar Maurer wrote:
>
>
> > >>Last but not least, how do you keep the load that migration generates from
> > >>impacting auto-migration decisions?
> >
> > Good question. I think we should use RRDs to average CPU usage stats.
>
>> Do they apply cleanly to 4.2.X?
No, they need to be backported.
(It doesn't seem hard, but it needs to be tested.)
----- Original Mail -----
From: "dietmar"
To: "aderumier" , "pve-devel"
Sent: Tuesday, 17 November 2015 16:39:46
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 16
1 file changed, 16 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 81a1c84..87b7d20 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -30,6 +30,7 @@ use
> On November 17, 2015 at 4:43 PM Michael Rasmussen wrote:
>
>
> On Tue, 17 Nov 2015 11:01:58 +0100 (CET)
> Alexandre DERUMIER wrote:
>
> >
> > I'm thinking of using RRD files to get stats over a long period (maybe
> > averaging over the last x minutes).
> >
>
> Besides fitting more with the declarative style of ExtJS, this has the
> interesting side effect of allowing comboboxes to work with ExtJS6
applied, but I wonder if there is an alternative/correct way to
do it inside initComponent?
> - Ext.apply(me, {
> - displayField: 'value',
>
On Tue, 17 Nov 2015 11:01:58 +0100 (CET)
Alexandre DERUMIER wrote:
>
> I'm thinking of using RRD files to get stats over a long period (maybe
> averaging over the last x minutes).
>
> (for example, we don't want to migrate a VM if the host is only overloaded for
> 1 or 2 minutes,
> because
On Tue, 17 Nov 2015 11:01:58 +0100 (CET)
Alexandre DERUMIER wrote:
> Yes, sure. I just want to start with CPU, but memory could be added too.
>
> I'm not sure about io-wait, as migrating the VM doesn't change the storage?
>
Is that also the case for Gluster?
--
> >>Last but not least, how do you keep the load that migration generates from
> >>impacting auto-migration decisions?
>
> Good question. I think we should use RRDs to average CPU usage stats.
I think that any load/CPU metric will be difficult (unstable).
I would simply use static values like CPU count or max memory.
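If dynamic metrics are used at all, averaging over RRD samples would damp the short spikes (including load caused by an ongoing migration) that make them unstable. A sketch of that idea, in Python for illustration, assuming we can read back the last per-minute CPU samples:

```python
from collections import deque

def smoothed_cpu(samples, window=10):
    """Average the last `window` samples so a 1-2 minute spike
    (e.g. caused by a migration in progress) does not trigger
    a rebalancing decision on its own."""
    recent = list(samples)[-window:]
    return sum(recent) / len(recent)

samples = deque(maxlen=60)           # last hour of per-minute CPU usage
for s in [0.2] * 8 + [0.9, 0.9]:     # steady load, then a short spike
    samples.append(s)

avg = smoothed_cpu(samples, window=10)
# the two-minute spike barely moves the 10-minute average (~0.34)
```

The window length directly trades responsiveness against stability, which is exactly the "don't migrate on a 1-2 minute overload" requirement mentioned above.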
> http://marc.info/?l=linux-netdev&m=144488823001097&w=2
> http://marc.info/?l=linux-netdev&m=144489933906491&w=2
>
> When will we get them in a fixed kernel? (I don't know if a backport to 4.2 is
> planned.)
Thanks for the links. Do they apply cleanly to 4.2.X?
This fixes the following problem:
After a manual browser refresh, widgets were displayed without data
---
www/manager6/dc/ACLView.js| 2 +-
www/manager6/dc/AuthView.js | 2 +-
www/manager6/dc/Backup.js | 2 +-
www/manager6/dc/GroupView.js | 2 +-
Reapply fix for https://bugzilla.proxmox.com/show_bug.cgi?id=716
which was missing in manager6/ directory
---
www/manager6/Toolkit.js | 8 +++-
www/manager6/dc/OptionView.js | 2 +-
www/manager6/dc/UserEdit.js | 2 +-
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git
---
www/manager6/form/RealmComboBox.js | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/www/manager6/form/RealmComboBox.js
b/www/manager6/form/RealmComboBox.js
index 7e6700f..ce59422 100644
--- a/www/manager6/form/RealmComboBox.js
+++
On 17.11.2015 at 14:56, Martin Waschbüsch wrote:
On 17.11.2015 at 14:20, Alexandre DERUMIER wrote:
Unless all cluster nodes have identical hardware, how do you determine if a
given node is a suitable target for a vm?
I think we could add a manual "host cpu weight"
>>Also, migrations could be blocked when a service is doing stuff like an
>>online backup.
Block jobs too (like mirroring). Maybe just checking whether the VM has a lock
would be enough.
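Testing the VM's lock would cover backup, snapshot and block jobs (mirroring) in one check. A sketch of the candidate filter (Python for illustration; the real check would live in the Perl code, and the field names are illustrative):

```python
def can_auto_migrate(vm_conf):
    """A locked VM (backup, snapshot, block-job mirror, ...) must never
    be picked as an auto-migration candidate; any lock value blocks it."""
    return vm_conf.get('lock') is None

def migration_candidates(vms):
    """Filter a node's VMs down to the ones the scheduler may move."""
    return [vmid for vmid, conf in vms.items() if can_auto_migrate(conf)]
```

This keeps the scheduler from racing against vzdump or a drive-mirror job without needing to enumerate every lock type separately.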
----- Original Mail -----
From: "Thomas Lamprecht"
To: "pve-devel"
>>Good point. Though, I was more thinking about situations where the cpu-type
>>is not set to default (kvm64, I think?) but to something like 'IvyBridge' or
>>Opteron_G5. (The primary use I had for using non-default cpu types was to
>>expose features such as AES-NI to a VM.)
I think we
Nice!
On Tue, Nov 17, 2015 at 5:39 PM, Alexandre Derumier
wrote:
> Signed-off-by: Alexandre Derumier
> ---
> PVE/QemuServer.pm | 16
> 1 file changed, 16 insertions(+)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index
I have sent another patch for qemu-kvm with ROMs;
I think it's pending on the mailing list because of its 2 MB size.
----- Original Mail -----
From: "aderumier"
To: "pve-devel"
Sent: Tuesday, 17 November 2015 17:39:51
Objet: [pve-devel] qemu-server : add
>> I think that any load/CPU metric will be difficult (unstable).
>> I would simply use static values like CPU count or max memory.
>
>or static weight.
For my personal usage, I don't think it'll work, because I have a lot of
different workloads at different times,
and some VMs can use
> But indeed, dynamic stats is not easy to resolve.
>
>
> Ovirt has a planned feature which use optaplanner
>
> http://www.ovirt.org/Features/Optaplanner
> This seems to do some kind of heuristics and math (beyond my competence ;)
> to decide which VM to migrate.
IMHO you can always generate a
> This fixes the following problem:
> After a manual browser refresh, widgets were displayed without data
> listeners: {
> - show: reload
> + render: reload
> }
> });
We have a few places where we use:
fireEvent('show', ...)
Does that still
applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
applied
>>I assume it is hard to get this stable (just a feeling).
yes, same for me.
>>On the other side, this would be simple
>>to implement. Each node is responsible for moving its own VMs, so you do not
>>even need a lock.
I was more thinking about a lock, to avoid node2 migrating a VM to node1 when
On 17 November 2015 at 08:40:19 CET, Dietmar Maurer wrote:
>
>> What do you think about it ?
>
>interesting
>
>>
>> As we don't have a master node, I don't know how to implement this:
>>
>> 1) each node tries to migrate its own VMs to another node with less CPU
>> usage.
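Approach (1) can work without a master if every node runs the same deterministic pass over cluster-wide stats and only moves its own VMs. An illustrative sketch (Python; names and the margin value are assumptions, not PVE code):

```python
def pick_target(my_node, cluster_usage, margin=0.2):
    """Decide where (if anywhere) this node should migrate one of its VMs.

    cluster_usage maps node name -> averaged CPU usage (0..1).
    Only migrate if some other node is at least `margin` less loaded,
    to avoid oscillation (ping-pong migrations between nodes)."""
    my_usage = cluster_usage[my_node]
    target = min(cluster_usage, key=cluster_usage.get)
    if target != my_node and my_usage - cluster_usage[target] >= margin:
        return target
    return None

usage = {'node1': 0.85, 'node2': 0.40, 'node3': 0.55}
pick_target('node1', usage)   # busiest node finds 'node2'
pick_target('node2', usage)   # least-loaded node stays put: None
```

Because every node evaluates the same shared stats, the hysteresis margin does most of the anti-conflict work; a cluster-wide lock around the actual migration would still be needed to stop two nodes from moving VMs toward each other simultaneously.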
Hi all,
On 2015-11-17 08:23, Alexandre DERUMIER wrote:
What do you think about it?
Sounds great, but I think memory and io-wait should be part of the list
as well.
As we don't have a master node, I don't know how to implement this:
1) each node tries to migrate its own VMs to another
> What do you think about it ?
>
>>Sounds great, but I think memory and io-wait should be part of the list
>>as well.
Yes, sure. I just want to start with CPU, but memory could be added too.
I'm not sure about io-wait, as migrating the VM doesn't change the storage?
>>Why not keep it simple? You
> +if ($conf->{ovmf}) {
> + my $ovmfvar = "OVMF_VARS-pure-efi.fd";
> + my $ovmfvar_src = "/usr/share/kvm/$ovmfvar";
> + my $ovmfvar_dst = "/tmp/$vmid-$ovmfvar";
So we lose all EFI settings after a reboot or migration? (/tmp/ is cleared at
reboot.)
fence.cfg will be used by the PVE HA manager for external fence
device configuration; this allows us to use the cfs_read_file and
cfs_write_file methods.
Signed-off-by: Thomas Lamprecht
---
data/PVE/Cluster.pm | 1 +
1 file changed, 1 insertion(+)
diff --git