Currently, if we don't have a machine option in the running config
and we take a vmstate snapshot,
the machine option is written to the snapshot (ok), but also to the running
config (bad).
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 3 ++-
1 file changed, 2
applied, thanks!
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
-} else { # hotplug new disks
-
+} elsif (!$old_volid) { # hotplug new disks
	die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg,
$conf, $vmid, $opt, $drive);
}
This does not display any errors if $old_volid is set?
I think we should raise an error too.
Good job, Dietmar.
But what about your plans for us to work together to finish the
translations?
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Cesar Peschiera br...@click.com.py
Sent: Friday, August 29, 2014 2:11 AM
Subject: RE: [pve-devel] Better translate to
Currently, if we don't have a machine option in the running config, and we take a
vmstate snapshot,
the machine option is written to the snapshot (ok), but also to the running
config
(bad).
Yes, we should fix this when we create a snapshot.
But your patch also affects rollback, where we have
applied, thanks!
same as the resource tree,
or we can create a linked clone from the grid
But what about your plans for us to work together to finish the
translations?
I would like to work on that, but I simply have no time to do that work now.
Maybe someone else is
interested in working on those gettext() fixes?
Hi Dietmar,
Do you remember that at the start we talked about working only on the part
that would be easy for you?
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Cesar Peschiera br...@click.com.py; pve-devel@pve.proxmox.com
Sent: Friday, August 29, 2014 3:11 AM
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e00a063..ad2aacc 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4810,7 +4810,7 @@ sub
For example, in snapshot_rollback(), $forcemachine is wrong because you do not
copy the machine
config with snapshot_apply_config().
I just sent a patch;
we just need to take $forcemachine from the snapshot's machine value, not the current
config.
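The fix being described could be sketched roughly like this (a hypothetical helper, not the actual patch; it assumes the PVE config layout where each snapshot section may record its own `machine` value):

```perl
# Illustrative sketch only: resolve the machine type to force on
# rollback from the snapshot's own recorded value, falling back to
# the current config if the snapshot did not store one.
sub forcemachine_from_snapshot {
    my ($conf, $snapname) = @_;
    my $snap = $conf->{snapshots}->{$snapname}
        or die "snapshot '$snapname' does not exist\n";
    return $snap->{machine} // $conf->{machine};
}
```

This way rollback restarts the VM with the machine type that was active when the snapshot was taken, independent of later edits to the running config.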
- Original Message -
From: Alexandre
What about this:
} else { # hotplug new disks
+    die "some useful error message" if $old_volid;
     die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg,
$conf, $vmid, $opt, $drive);
}
}
-Original Message-
From: Alexandre DERUMIER
What do you think about this (looks more obvious to me)?
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 327ea35..b4358b0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4741,6 +4741,8 @@ my $snapshot_commit = sub {
die "missing snapshot lock\n"
    if
What about this:
} else { # hotplug new disks
+    die "some useful error message" if $old_volid;
     die "error hotplug $opt" if
         !PVE::QemuServer::vm_deviceplug($storecfg, $conf, $vmid, $opt, $drive);
}
}
The problem is that we are in $vmconfig_update_disk(),
so it'll die if we
What do you think about this (looks more obvious to me)?
Works fine here!
Thanks.
- Original Message -
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, August 29, 2014 10:21:32
Subject: RE: [pve-devel] [PATCH]
We also need to check whether the VM is running and hotplug is enabled
(because without hotplug, we are allowed to replace a disk with another;
the code currently handles the swap, putting the first disk as unused).
Something like:
} elsif (PVE::QemuServer::check_running($vmid) && $conf->{hotplug}) { #
    if ($old_volid) {
        die "you need to remove the current disk before hotplugging it"
            if $old_volid ne $volid;
Also, I think $old_volid is put as unused earlier in update_disk:
if (!PVE::QemuServer::drive_is_cdrom($old_drive) &&
    ($drive->{file} ne $old_drive->{file})) { # delete old
Ok, this one fixes hotplugging of disks (SCSI),
and allows editing options too.
So I think there is no need to change Qemu.pm's update_disk.
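The decision flow discussed in this thread can be summarized in a small standalone sketch (illustrative names only, not the real $vmconfig_update_disk code):

```perl
# Illustrative sketch of the update-disk decision discussed above:
# when a disk slot already holds a different volume, refuse to
# hot-swap it; if the same volume is kept, only options changed;
# otherwise hotplug the new disk. Without a running VM or hotplug,
# only the config is edited.
sub update_disk_check {
    my ($running, $hotplug, $old_volid, $volid) = @_;
    if ($running && $hotplug) {
        if ($old_volid) {
            die "you need to remove the current disk before hotplugging a new one\n"
                if $old_volid ne $volid;
            return 'update-options';   # same volume: only options changed
        }
        return 'hotplug';              # empty slot: plug the new disk
    }
    return 'config-only';              # not running or no hotplug
}
```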
This allows SCSI disks to be plugged/unplugged.
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm |9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b4358b0..2058131 100644
--- a/PVE/QemuServer.pm
+++
Forwarding to the pve-devel mailing list.
- Forwarded Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Christian Tari christ...@zaark.com
Sent: Friday, August 29, 2014 15:08:49
Subject: Re: [pve-devel] successfull migration but failed resume
Can it lead to issues if we migrate between two
I might be able to do some tests, but I have to take this E5-2640 server out
of this production cluster and create a new test cluster. It will take some
days until I rearrange things. If that's fine, I'm okay with it.
Does this mean I have to re-install Proxmox 3.1 on both cluster nodes?
If you remove node
Note, I just receive some new opteron servers, so I'll do tests next week :)
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Christian Tari christ...@zaark.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, August 29, 2014 16:14:09
Subject: Re: [pve-devel] successfull
On Fri, 29 Aug 2014 17:11:08 +0200 (CEST)
Alexandre DERUMIER aderum...@odiso.com wrote:
Note, I just receive some new opteron servers, so I'll do tests next week :)
As mentioned before I had the same problems migrating from Opteron to
Phenom and Athlon II based CPUs.
From which CPU generation
From which CPU generation has AMD introduced the cpu flag xsave?
I see it on Opteron 63XX , but not 61XX.
BTW, does it work for you with the current 3.10 kernel? (which doesn't have the
xsave patch yet)
- Original Message -
From: Michael Rasmussen m...@datanom.net
To:
On Fri, 29 Aug 2014 17:23:31 +0200 (CEST)
Alexandre DERUMIER aderum...@odiso.com wrote:
From which CPU generation has AMD introduced the cpu flag xsave?
I see it on Opteron 63XX , but not 61XX.
Just found it here: Family 15h and up.
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2013-2076
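As an aside, the flag check mentioned above can be done by parsing /proc/cpuinfo. A small helper (illustrative only, not PVE code) that extracts the flag set from cpuinfo text, kept string-based so it is easy to test:

```perl
# Illustrative helper: parse the first 'flags' line of /proc/cpuinfo
# text and return a hash of CPU flags, so the presence of e.g.
# 'xsave' can be tested. The text is passed in rather than read from
# disk to keep the function portable and testable.
sub flags_from_cpuinfo {
    my ($cpuinfo_text) = @_;
    for my $line (split /\n/, $cpuinfo_text) {
        if ($line =~ /^flags\s*:\s*(.*)/) {
            return { map { $_ => 1 } split /\s+/, $1 };
        }
    }
    return {};    # no flags line found
}
```

On a live Linux system one would slurp /proc/cpuinfo and then check `$flags->{xsave}`.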
Ohhh Dietmar, I didn't know that you thought so. So excuse me if I am
causing inconvenience, but as you told me that we could work on the easy part
of the translation, I assumed that you would not have problems.
On the other hand, I know that you are very busy with important changes in