[pve-devel] [PATCH] vmstate snapshot : do not write machine to running config

2014-08-29 Thread Alexandre Derumier
Currently, if we don't have a machine option in the running config
and we take a vmstate snapshot,

the machine option is written to the snapshot (ok), but also to the running
config (bad)

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 327ea35..e00a063 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4596,6 +4596,7 @@ my $snapshot_copy_config = sub {
next if $k eq 'lock';
next if $k eq 'digest';
next if $k eq 'description';
+   next if $k eq 'machine';
next if $k =~ m/^unused\d+$/;
 
$dest->{$k} = $source->{$k};
@@ -4612,7 +4613,7 @@ my $snapshot_apply_config = sub {
 
 # keep description and list of unused disks
 foreach my $k (keys %$conf) {
-   next if !($k =~ m/^unused\d+$/ || $k eq 'description');
+   next if !($k =~ m/^unused\d+$/ || $k eq 'description' || $k eq 'machine');
$newconf->{$k} = $conf->{$k};
 }
 
-- 
1.7.10.4
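
For anyone who wants to reproduce the issue, here is a minimal verification sketch (not part of the patch; it only assumes the usual /etc/pve/qemu-server/<vmid>.conf layout, where snapshot sections start with a [name] header). It reports whether a 'machine' line is present in the running (top) part of the config before and after taking a vmstate snapshot.

#!/usr/bin/perl
# Verification sketch (not part of the patch): check whether the running
# section of a VM config carries a 'machine' line. Snapshot sections start
# at the first '[name]' header, so we stop looking there.
use strict;
use warnings;

my $vmid = shift or die "usage: $0 <vmid>\n";
my $path = "/etc/pve/qemu-server/$vmid.conf";

open(my $fh, '<', $path) or die "cannot open $path: $!\n";
my $found = 0;
while (my $line = <$fh>) {
    last if $line =~ m/^\[/;                  # snapshot sections begin here
    $found = 1 if $line =~ m/^machine:\s*\S/;
}
close($fh);

print $found ? "machine option present in running config\n"
             : "no machine option in running config\n";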



Re: [pve-devel] [PATCH] allow hotplug of virtio-scsi disks

2014-08-29 Thread Dietmar Maurer
applied, thanks!




Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already exist.

2014-08-29 Thread Dietmar Maurer
 -} else { # hotplug new disks
 -
 +} elsif (!$old_volid) { # hotplug new disks
  die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg,
 $conf, $vmid, $opt, $drive);
  }

This does not display any errors if $old_volid is set?
I think we should raise an error to indicate that something went wrong?



Re: [pve-devel] Better translate to Spanish language

2014-08-29 Thread Cesar Peschiera

Good job Dietmar

But what about your plans for us to work together to finish the
translations?

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Cesar Peschiera br...@click.com.py
Sent: Friday, August 29, 2014 2:11 AM
Subject: RE: [pve-devel] Better translate to Spanish language



Here I send you an image of the problem (in the option to add users, in the
Spanish
language, after doing my translation)


Looks like a bug in the ExtJS layout engine. But I found a simple
workaround:

https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=36ec32ec2684962890755d7b92a0f6587f559c86

- Dietmar





Re: [pve-devel] [PATCH] vmstate snapshot : do not write machine to running config

2014-08-29 Thread Dietmar Maurer

 Currently, if we don't have a machine option in the running config and we take a
 vmstate snapshot,
 
 the machine option is written to the snapshot (ok), but also to the running
 config
 (bad)

Yes, we should fix this when we create a snapshot.

But your patch also affects rollback, where we have special code to restore the
machine config.

For example, in snapshot_rollback(), $forcemachine is wrong because you do not
copy the machine
config with snapshot_apply_config().

or do I miss something?
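
To make the concern concrete, here is a rough sketch of the rollback logic in question (paraphrased, not the exact upstream code):

# Paraphrased sketch of the snapshot_rollback() concern, not the exact code.
# With a saved vmstate the VM must be resumed with the machine type that was
# active when the snapshot was taken:
if ($snap->{vmstate}) {
    # problematic: taking the value from the current/applied config depends
    # on 'machine' being copied around correctly
    # my $forcemachine = $conf->{machine} || 'pc-i440fx-1.4';

    # safer: read the machine type directly from the snapshot section
    my $forcemachine = $snap->{machine} || 'pc-i440fx-1.4';
}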



Re: [pve-devel] [PATCH] resourcegrid : add template menu

2014-08-29 Thread Dietmar Maurer
applied, thanks!

 same as the resourcetree
 or we can create a linked clone from the grid



Re: [pve-devel] Better translate to Spanish language

2014-08-29 Thread Dietmar Maurer
 But what about your plans for us to work together to finish the
 translations?

I would like to work on that, but I simply have no time to do that work now.
Maybe someone else is
interested in working on those gettext() fixes?




Re: [pve-devel] Better translate to Spanish language

2014-08-29 Thread Cesar Peschiera

Hi Dietmar

Do you remember that at the start we talked about working only on the part
that will be easy for you?



- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Cesar Peschiera br...@click.com.py; pve-devel@pve.proxmox.com
Sent: Friday, August 29, 2014 3:11 AM
Subject: RE: [pve-devel] Better translate to Spanish language


But what about your plans for us to work together to finish the
translations?


I would like to work on that, but I simply have no time to do that work now.
Maybe someone else is

interested in working on those gettext() fixes?





[pve-devel] [PATCH] snapshot_rollback : forcemachine config from snapshot config

2014-08-29 Thread Alexandre Derumier

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e00a063..ad2aacc 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4810,7 +4810,7 @@ sub snapshot_rollback {
 
# Note: old code did not store 'machine', so we try to be smart
# and guess the snapshot was generated with kvm 1.4 (pc-i440fx-1.4).
-   $forcemachine = $conf->{machine} || 'pc-i440fx-1.4';
+   $forcemachine = $snap->{machine} || 'pc-i440fx-1.4';
# we remove the 'machine' configuration if not explicitly specified
# in the original config.
delete $conf->{machine} if $snap->{vmstate} && !$has_machine_config;
-- 
1.7.10.4



Re: [pve-devel] [PATCH] vmstate snapshot : do not write machine to running config

2014-08-29 Thread Alexandre DERUMIER
For example, in snapshot_rollback(),$forcemachine is wrong because you do not 
copy machine 
config with snapshot_apply_config()  

I just sent a patch;

we just need to take $forcemachine from the snapshot's machine value, and not from the current
config.



- Original Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 09:25:46
Subject: Re: [pve-devel] [PATCH] vmstate snapshot : do not write machine to
running config

For example, in snapshot_rollback(),$forcemachine is wrong because you do not 
copy machine 
config with snapshot_apply_config()  
 
or do I miss something? 

Ok, I'll check that ! 

- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 08:58:35
Subject: RE: [pve-devel] [PATCH] vmstate snapshot : do not write machine to
running config


 Currently,if we don't have a machine option in running config, and we take 
 a 
 vmstate snapshot 
 
 the machine option is write in the snapshot (ok), but also in the running 
 config 
 (bad) 

Yes, we should fix this when we create a snapshot. 

But your patch also affect rollback, where we have special code to restore 
machine config. 

For example, in snapshot_rollback(),$forcemachine is wrong because you do not 
copy machine 
config with snapshot_apply_config()  

or do I miss something? 


Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already exist.

2014-08-29 Thread Dietmar Maurer
what about this:

} else { # hotplug new disks
+   die "some useful error message" if $old_volid;
die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg,
$conf, $vmid, $opt, $drive);
}
}

 -Original Message-
 From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
 Sent: Friday, 29 August 2014 09:25
 To: Dietmar Maurer
 Cc: pve-devel@pve.proxmox.com
 Subject: Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already 
 exist.
 
 This does not display any errors if $old_volid is set?
 I think we should raise an error to indicate that something went wrong?
 
 
 
 
 Maybe
 
 elsif (!$old_volid) { # hotplug new disks
   die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg,
 $conf, $vmid, $opt, $drive);
 
 } elsif ($old_volid && $old_volid ne $new_volid) {
   raise an error ?
 }
 
 
 ?
 
 - Original Message -
 
 From: Dietmar Maurer diet...@proxmox.com
 To: Alexandre Derumier aderum...@odiso.com, pve-
 de...@pve.proxmox.com
 Sent: Friday, 29 August 2014 08:29:00
 Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
 exist.
 
  - } else { # hotplug new disks
  -
  + } elsif (!$old_volid) { # hotplug new disks
  die error hotplug $opt if !PVE::QemuServer::vm_deviceplug($storecfg,
  $conf, $vmid, $opt, $drive);
  }
 
 This does not display any errors if $old_volid is set?
 I think we should raise an error to indicate that something went wrong?



Re: [pve-devel] [PATCH] vmstate snapshot : do not write machine to running config

2014-08-29 Thread Dietmar Maurer
What do you think about this (looks more obvious to me)?

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 327ea35..b4358b0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4741,6 +4741,8 @@ my $snapshot_commit = sub {
die "missing snapshot lock\n"
if !($conf->{lock} && $conf->{lock} eq 'snapshot');

+   my $has_machine_config = defined($conf->{machine});
+
my $snap = $conf->{snapshots}->{$snapname};

die "snapshot '$snapname' does not exist\n" if !defined($snap);
@@ -4753,6 +4755,8 @@ my $snapshot_commit = sub {

my $newconf = &$snapshot_apply_config($conf, $snap);

+   delete $newconf->{machine} if !$has_machine_config;
+
$newconf->{parent} = $snapname;
 
update_config_nolock($vmid, $newconf, 1);

 For example, in snapshot_rollback(),$forcemachine is wrong because you
 do not copy machine config with snapshot_apply_config() 
 
 I just send a patch,
 
 we just need to take $forcemachine from snapshot machine value, and not
 current config.


Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already exist.

2014-08-29 Thread Alexandre DERUMIER
what about this:

} else { # hotplug new disks
+ die "some useful error message" if $old_volid;
die "error hotplug $opt" if
 !PVE::QemuServer::vm_deviceplug($storecfg, $conf, $vmid, $opt, $drive);
}
}

The problem is that we are in $vmconfig_update_disk(),

so it'll die if we try to update any parameters (disk throttle, discard, backup).
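
A possible middle ground (a sketch only; $new_volid and the surrounding structure follow the discussion, not the actual code) would be to refuse only when the volume id really changes, so parameter-only updates still go through:

# Sketch: only treat it as an error when the disk is actually being replaced.
# Parameter-only updates (throttle, discard, backup, ...) keep the same volid
# and therefore still reach the normal update path.
if ($old_volid && $old_volid ne $new_volid) {
    die "disk $opt already exists - remove it before hotplugging a new one\n";
} elsif (!$old_volid) {
    die "error hotplug $opt"
        if !PVE::QemuServer::vm_deviceplug($storecfg, $conf, $vmid, $opt, $drive);
}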


- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 10:11:18
Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
exist.

what about this: 

} else { # hotplug new disks 
+ die some useful error mesage if $old_volid; 
die error hotplug $opt if !PVE::QemuServer::vm_deviceplug($storecfg, $conf, 
$vmid, $opt, $drive); 
} 
} 

 -Original Message- 
 From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
 Sent: Friday, 29 August 2014 09:25
 To: Dietmar Maurer 
 Cc: pve-devel@pve.proxmox.com 
 Subject: Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already 
 exist. 
 
 This does not display any errors if $old_volid is set? 
 I think we should raise an error to indicate that something went wrong? 
 
 
 
 
 Maybe 
 
 elsif (!$old_volid) { # hotplug new disks 
 die error hotplug $opt if !PVE::QemuServer::vm_deviceplug($storecfg, 
 $conf, $vmid, $opt, $drive); 
 
 }elseif ($old_voldid  $old_voldid ne $new_volid { 
 raise an error ? 
 } 
 
 
 ? 
 
 - Original Message -
 
 From: Dietmar Maurer diet...@proxmox.com
 To: Alexandre Derumier aderum...@odiso.com, pve-
 de...@pve.proxmox.com
 Sent: Friday, 29 August 2014 08:29:00
 Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
 exist.
 
  - } else { # hotplug new disks 
  - 
  + } elsif (!$old_volid) { # hotplug new disks 
  die error hotplug $opt if !PVE::QemuServer::vm_deviceplug($storecfg, 
  $conf, $vmid, $opt, $drive); 
  } 
 
 This does not display any errors if $old_volid is set? 
 I think we should raise an error to indicate that something went wrong? 


Re: [pve-devel] [PATCH] vmstate snapshot : do not write machine to running config

2014-08-29 Thread Alexandre DERUMIER
What do you think about this (looks more obvious to me)?

Works fine here !
Thanks.

- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 10:21:32
Subject: RE: [pve-devel] [PATCH] vmstate snapshot : do not write machine to
running config

What do you think about this (looks more obvious to me)?

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
index 327ea35..b4358b0 100644 
--- a/PVE/QemuServer.pm 
+++ b/PVE/QemuServer.pm 
@@ -4741,6 +4741,8 @@ my $snapshot_commit = sub { 
die "missing snapshot lock\n"
if !($conf->{lock} && $conf->{lock} eq 'snapshot');

+ my $has_machine_config = defined($conf->{machine});
+
my $snap = $conf->{snapshots}->{$snapname};

die "snapshot '$snapname' does not exist\n" if !defined($snap);
@@ -4753,6 +4755,8 @@ my $snapshot_commit = sub {

my $newconf = &$snapshot_apply_config($conf, $snap);

+ delete $newconf->{machine} if !$has_machine_config;
+
$newconf->{parent} = $snapname;

update_config_nolock($vmid, $newconf, 1); 

 For example, in snapshot_rollback(),$forcemachine is wrong because you 
 do not copy machine config with snapshot_apply_config()  
 
 I just send a patch, 
 
 we just need to take $forcemachine from snapshot machine value, and not 
 current config. 


Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already exist.

2014-08-29 Thread Alexandre DERUMIER
We also need to check if the VM is running and hotplug is enabled

(because without hotplug, we are allowed to replace a disk with another one;
 the code currently manages the swap, putting the first disk as unused)

something like:
elsif (PVE::QemuServer::check_running($vmid) && $conf->{hotplug}) { #
hotplug new disks
if($old_volid){
die "you need to remove current disk before hotplug it" if
$old_volid ne $volid;
}else{
die "error hotplug $opt" if
!PVE::QemuServer::vm_deviceplug($storecfg, $conf, $vmid, $opt, $drive);
}
}



Also, I found another bug: we update the conf with the new volid before trying the
hotplug.
If the hotplug fails, we end up with the wrong disk written to the config.

I'll check the whole vmconfig_update_disk sub to see how to improve that.
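
A rough sketch of the safer ordering (hypothetical structure, not the current code): try the hotplug first and only persist the new drive once it succeeded, so a failed hotplug leaves the config untouched:

# Hypothetical ordering sketch: hotplug first, write the config afterwards.
if (PVE::QemuServer::check_running($vmid) && $conf->{hotplug}) {
    PVE::QemuServer::vm_deviceplug($storecfg, $conf, $vmid, $opt, $drive)
        or die "error hotplug $opt\n";
}

# only reached when the hotplug (if any) succeeded
$conf->{$opt} = PVE::QemuServer::print_drive($vmid, $drive);
PVE::QemuServer::update_config_nolock($vmid, $conf, 1);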



- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 11:26:00
Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
exist.

 what about this: 
  
  } else { # hotplug new disks 
 + die "some useful error message" if $old_volid;
  die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg,
 $conf, $vmid, $opt, $drive);
  }
 }
 
 The problem is that we are in $vmconfig_update_disk(),

die "some useful error message" if $old_volid && $old_volid ne $new_volid;

 
 so it'll die if we try to update any parameters (disk 
 throttle,discard,backup). 
 
 
 - Original Message -
 
 From: Dietmar Maurer diet...@proxmox.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Cc: pve-devel@pve.proxmox.com
 Sent: Friday, 29 August 2014 10:11:18
 Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
 exist.
 
 what about this: 
 
 } else { # hotplug new disks 
 + die some useful error mesage if $old_volid; 
 die error hotplug $opt if !PVE::QemuServer::vm_deviceplug($storecfg, $conf, 
 $vmid, $opt, $drive); } } 
 
  -Original Message- 
  From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
  Sent: Friday, 29 August 2014 09:25
  To: Dietmar Maurer 
  Cc: pve-devel@pve.proxmox.com 
  Subject: Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk 
  already 
 exist. 
  
  This does not display any errors if $old_volid is set? 
  I think we should raise an error to indicate that something went wrong? 
  
  
  
  
  Maybe 
  
  elsif (!$old_volid) { # hotplug new disks die error hotplug $opt if 
  !PVE::QemuServer::vm_deviceplug($storecfg, 
  $conf, $vmid, $opt, $drive); 
  
  }elseif ($old_voldid  $old_voldid ne $new_volid { raise an error ? 
  } 
  
  
  ? 
  
  - Original Message -
  
  From: Dietmar Maurer diet...@proxmox.com
  To: Alexandre Derumier aderum...@odiso.com, pve-
  de...@pve.proxmox.com
  Sent: Friday, 29 August 2014 08:29:00
  Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
  exist.
  
   - } else { # hotplug new disks 
   - 
   + } elsif (!$old_volid) { # hotplug new disks 
   die error hotplug $opt if 
   !PVE::QemuServer::vm_deviceplug($storecfg, 
   $conf, $vmid, $opt, $drive); 
   } 
  
  This does not display any errors if $old_volid is set? 
  I think we should raise an error to indicate that something went wrong? 


Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already exist.

2014-08-29 Thread Alexandre DERUMIER
 if($old_volid){
die "you need to remove current disk before hotplug it" if
 $old_volid ne $volid;

Also, I think $old_volid is put as unused earlier in update_disk:

 if (!PVE::QemuServer::drive_is_cdrom($old_drive) &&
($drive->{file} ne $old_drive->{file})) {  # delete old disks

&$vmconfig_delete_option($rpcenv, $authuser, $conf, $storecfg,
$vmid, $opt, $force);
$conf = PVE::QemuServer::load_config($vmid); # update/reload
}

As far as I remember, it's working with hot-unplug too.

I'll do tests.



- Original Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Dietmar Maurer diet...@proxmox.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 12:22:49
Subject: Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
exist.

We need also to check if the vm is running and hotplug is enabled 

(because without hotplug, we are allow to replace a disk with another, 
the code manage currently the swap, putting the first disk as unused) 

something like: 
elsif (PVE::QemuServer::check_running($vmid) && $conf->{hotplug}) { # hotplug
new disks
if($old_volid){
die "you need to remove current disk before hotplug it" if $old_volid ne
$volid;
}else{
die "error hotplug $opt" if !PVE::QemuServer::vm_deviceplug($storecfg, $conf,
$vmid, $opt, $drive); 
} 
} 



also, I found another bug, we update the conf with new_volid before trying 
hotplug. 
If hotplug fail, we had the wrong disk updated in config 

I'll check the whole vmconfig_update_disk sub, to see to improve that 



- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 11:26:00
Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
exist.

 what about this: 
  
  } else { # hotplug new disks 
 + die some useful error mesage if $old_volid; 
  die error hotplug $opt if !PVE::QemuServer::vm_deviceplug($storecfg, 
 $conf, $vmid, $opt, $drive); 
  } 
 } 
 
 The problem is that we are in $vmconfig_update_disk(), 

die some useful error mesage if $old_volid  $old_volid ne $new_volid; 

 
 so it'll die if we try to update any parameters (disk 
 throttle,discard,backup). 
 
 
 - Original Message -
 
 From: Dietmar Maurer diet...@proxmox.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Cc: pve-devel@pve.proxmox.com
 Sent: Friday, 29 August 2014 10:11:18
 Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
 exist.
 
 what about this: 
 
 } else { # hotplug new disks 
 + die some useful error mesage if $old_volid; 
 die error hotplug $opt if !PVE::QemuServer::vm_deviceplug($storecfg, $conf, 
 $vmid, $opt, $drive); } } 
 
  -Original Message- 
  From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
  Sent: Friday, 29 August 2014 09:25
  To: Dietmar Maurer 
  Cc: pve-devel@pve.proxmox.com 
  Subject: Re: [pve-devel] [PATCH] don't try to hotplug disk if a disk 
  already 
 exist. 
  
  This does not display any errors if $old_volid is set? 
  I think we should raise an error to indicate that something went wrong? 
  
  
  
  
  Maybe 
  
  elsif (!$old_volid) { # hotplug new disks die error hotplug $opt if 
  !PVE::QemuServer::vm_deviceplug($storecfg, 
  $conf, $vmid, $opt, $drive); 
  
  }elseif ($old_voldid  $old_voldid ne $new_volid { raise an error ? 
  } 
  
  
  ? 
  
  - Original Message -
  
  From: Dietmar Maurer diet...@proxmox.com
  To: Alexandre Derumier aderum...@odiso.com, pve-
  de...@pve.proxmox.com
  Sent: Friday, 29 August 2014 08:29:00
  Subject: RE: [pve-devel] [PATCH] don't try to hotplug disk if a disk already
  exist.
  
   - } else { # hotplug new disks 
   - 
   + } elsif (!$old_volid) { # hotplug new disks 
   die error hotplug $opt if 
   !PVE::QemuServer::vm_deviceplug($storecfg, 
   $conf, $vmid, $opt, $drive); 
   } 
  
  This does not display any errors if $old_volid is set? 
  I think we should raise an error to indicate that something went wrong? 


[pve-devel] qemu-server : vm_devices_list : also list block devices

2014-08-29 Thread Alexandre Derumier
Ok, this one fixes hotplugging of disks (scsi),
and allows editing options too.

So, I think there is no need to change Qemu.pm update_disk.



[pve-devel] [PATCH] vm_devices_list : also list block devices

2014-08-29 Thread Alexandre Derumier
This allows scsi disks to be plugged|unplugged

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b4358b0..2058131 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2876,7 +2876,14 @@ sub vm_devices_list {
 foreach my $pcibus (@$res) {
foreach my $device (@{$pcibus-{devices}}) {
next if !$device-{'qdev_id'};
-   $devices->{$device->{'qdev_id'}} = $device;
+   $devices->{$device->{'qdev_id'}} = 1;
+   }
+}
+
+my $resblock = vm_mon_cmd($vmid, 'query-block');
+foreach my $block (@$resblock) {
+   if($block->{device} =~ m/^drive-(\S+)/){
+   $devices->{$1} = 1;
}
 }
 
-- 
1.7.10.4
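
As a usage illustration (not part of the patch), the map returned by the patched vm_devices_list() lets a caller check whether a given disk id, e.g. scsi1, is already attached before deciding to hotplug:

# Usage sketch (illustrative only): 'scsi1' is just an example device id.
my $devices = vm_devices_list($vmid);

if ($devices->{scsi1}) {
    # the guest already exposes scsi1 (via its qdev_id or drive-scsi1)
    warn "scsi1 already plugged\n";
} else {
    # safe to attempt the hotplug for this id
}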



[pve-devel] Fwd: successfull migration but failed resume

2014-08-29 Thread Alexandre DERUMIER
Forwarding to the pve-devel mailing list

- Forwarded Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Christian Tari christ...@zaark.com
Sent: Friday, 29 August 2014 15:08:49
Subject: Re: [pve-devel] successfull migration but failed resume

Can it lead issues if we migrate between two different arch? BTW the prior is 
HP dL360G8 the latter is HP dl380G7. 

I have the same bug with amd opteron 63XX -> 61XX,

I think because of a kvm bug, with the cpu flag xsave existing on 63XX
and not on 61XX.
https://lkml.org/lkml/2014/2/22/58


It seems to be your case too, with

E5-2640 0 @ 2.50GHz : xsave
CPU E5645 @ 2.40GHz : no xsave.


Does the migration work in the reverse direction?


I have a kernel 3.10 patch for this xsave bug, but I haven't tested it yet.
I don't know if you could test it?
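
As a quick check (a sketch only, not part of any patch), the flag difference can be confirmed by running something like this on both nodes and comparing the output:

# Sketch: report whether the 'xsave' cpu flag is present on this host.
# Run on both the source and the target node and compare the output.
use strict;
use warnings;

open(my $fh, '<', '/proc/cpuinfo') or die "cannot read /proc/cpuinfo: $!\n";
my $has_xsave = 0;
while (my $line = <$fh>) {
    if ($line =~ m/^flags\s*:\s*(.*)/) {
        my %flags = map { $_ => 1 } split(/\s+/, $1);
        $has_xsave = 1 if $flags{xsave};
        last;    # all cores report the same flag set
    }
}
close($fh);
print $has_xsave ? "xsave present\n" : "xsave missing\n";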




- Original Message -

From: Christian Tari christ...@zaark.com
To: Alexandre DERUMIER aderum...@odiso.com
Sent: Friday, 29 August 2014 14:16:59
Subject: Re: [pve-devel] successfull migration but failed resume

Yes, the default, kvm64. 
Can it lead issues if we migrate between two different arch? BTW the prior is 
HP dL360G8 the latter is HP dl380G7. 
The strange thing is that it doesn’t happen every time. Especially after a 
failed migration the subsequent migrations always work. It happens often 
instances with relatively higher memory usage (6-18GB). Can it be some timeout 
while the content of the memory is being transferred? 
Aug 29 11:37:42 ERROR: migration finished with problems (duration 00:04:23) 




//Christian 



On 29 Aug 2014, at 14:08, Alexandre DERUMIER  aderum...@odiso.com  wrote: 


and you guest cpu is kvm64? 


- Original Message -

From: Christian Tari christ...@zaark.com
To: Alexandre DERUMIER aderum...@odiso.com
Sent: Friday, 29 August 2014 13:02:15
Subject: Re: [pve-devel] successfull migration but failed resume

Source host: 
processor : 11 
vendor_id : GenuineIntel 
cpu family : 6 
model : 45 
model name : Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz 
stepping : 7 
cpu MHz : 2493.793 
cache size : 15360 KB 
physical id : 0 
siblings : 12 
core id : 5 
cpu cores : 6 
apicid : 11 
initial apicid : 11 
fpu : yes 
fpu_exception : yes 
cpuid level : 13 
wp : yes 
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 
clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm 
constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf 
pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid 
dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida 
arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid 
bogomips : 4987.58 
clflush size : 64 
cache_alignment : 64 
address sizes : 46 bits physical, 48 bits virtual 
power management: 

# pveversion 
pve-manager/3.2-1/1933730b (running kernel: 2.6.32-27-pve) 

Target host: 
processor : 11 
vendor_id : GenuineIntel 
cpu family : 6 
model : 44 
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz 
stepping : 2 
cpu MHz : 2399.404 
cache size : 12288 KB 
physical id : 1 
siblings : 12 
core id : 9 
cpu cores : 6 
apicid : 50 
initial apicid : 50 
fpu : yes 
fpu_exception : yes 
cpuid level : 11 
wp : yes 
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 
clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm 
constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf 
pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid 
dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat epb dts tpr_shadow vnmi 
flexpriority ept vpid 
bogomips : 4798.17 
clflush size : 64 
cache_alignment : 64 
address sizes : 40 bits physical, 48 bits virtual 
power management: 

# pveversion 
pve-manager/3.2-4/e24a91c1 (running kernel: 2.6.32-29-pve) 

//Christian 


On 29 Aug 2014, at 12:56, Alexandre DERUMIER  aderum...@odiso.com  wrote: 


 

Aug 29 11:37:39 ERROR: VM 711 not running 



 

It seems that the kvm process crashed just after the migration; that's why the
resume failed.

What are the source and target host processors?



- Original Message -

From: Christian Tari christ...@zaark.com
To: aderum...@odiso.com
Sent: Friday, 29 August 2014 12:31:10
Subject: [pve-devel] successfull migration but failed resume

Hi, 

I know this isn't the proper way to get support, but we are having exactly the
same issue as described in the mail thread. 
I’ve been browsing the forum for a while now but can’t find any similar case. 

Aug 29 11:37:39 migration speed: 31.50 MB/s - downtime 150 ms 
Aug 29 11:37:39 migration status: completed 
Aug 29 11:37:39 ERROR: VM 711 not running 
Aug 29 11:37:39 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' root@1.1.1.1 qm 
resume 711 --skiplock' failed: exit code 2 

pve-manager/3.2-4/e24a91c1 (running kernel: 2.6.32-29-pve) 

Is there anything we can do, any hints you can give? 

Thanks, 

Re: [pve-devel] successfull migration but failed resume

2014-08-29 Thread Alexandre DERUMIER
I might be able to do some tests but I have to take this E5-2640 server out 
from this production cluster and create a new test cluster. It takes some 
days until I rearrange things. If that’s fine Im okay.
Does this mean I have to re-install proxmox 3.1 on both cluster nodes?

If you remove a node from a cluster, yes, it's better to reinstall it before joining
a new cluster.

(BTW: it's Proxmox 3.2, right? Not 3.1?)


It could be great to test with the current 3.10 kernel.



- Original Message -

From: Christian Tari christ...@zaark.com
To: Alexandre DERUMIER aderum...@odiso.com
Sent: Friday, 29 August 2014 15:19:10
Subject: Re: [pve-devel] successfull migration but failed resume

Good. At least we are on track. 

I might be able to do some tests but I have to take this E5-2640 server out 
from this production cluster and create a new test cluster. It takes some days 
until I rearrange things. If that’s fine Im okay. 
Does this mean I have to re-install proxmox 3.1 on both cluster nodes? 

//Christian 


On 29 Aug 2014, at 15:08, Alexandre DERUMIER aderum...@odiso.com wrote: 

 Can it lead issues if we migrate between two different arch? BTW the prior 
 is HP dL360G8 the latter is HP dl380G7. 
 
 I have same bug with amd opteron 63XX -> 61XX,
 
 I think because of a bug of kvm, with the cpuflags :xsave existing on 63XX 
 and not 61XX. 
 https://lkml.org/lkml/2014/2/22/58 
 
 
 It seem to be your case too, with 
 
 E5-2640 0 @ 2.50GHz : xsave 
 CPU E5645 @ 2.40GHz : no xsave. 
 
 
 Does the migration in the reverse way is working ? 
 
 
 I have a kernel 3.10 patch for this xsave bug, but don't have tested it yet. 
 Don't known if you could test it ? 
 
 
 
 
 - Original Message -
 
 From: Christian Tari christ...@zaark.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Sent: Friday, 29 August 2014 14:16:59
 Subject: Re: [pve-devel] successfull migration but failed resume
 
 Yes, the default, kvm64. 
 Can it lead issues if we migrate between two different arch? BTW the prior is 
 HP dL360G8 the latter is HP dl380G7. 
 The strange thing is that it doesn’t happen every time. Especially after a 
 failed migration the subsequent migrations always work. It happens often 
 instances with relatively higher memory usage (6-18GB). Can it be some 
 timeout while the content of the memory is being transferred? 
 Aug 29 11:37:42 ERROR: migration finished with problems (duration 00:04:23) 
 
 
 
 
 //Christian 
 
 
 
 On 29 Aug 2014, at 14:08, Alexandre DERUMIER  aderum...@odiso.com  wrote: 
 
 
 and you guest cpu is kvm64? 
 
 
 - Original Message -
 
 From: Christian Tari christ...@zaark.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Sent: Friday, 29 August 2014 13:02:15
 Subject: Re: [pve-devel] successfull migration but failed resume
 
 Source host: 
 processor : 11 
 vendor_id : GenuineIntel 
 cpu family : 6 
 model : 45 
 model name : Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz 
 stepping : 7 
 cpu MHz : 2493.793 
 cache size : 15360 KB 
 physical id : 0 
 siblings : 12 
 core id : 5 
 cpu cores : 6 
 apicid : 11 
 initial apicid : 11 
 fpu : yes 
 fpu_exception : yes 
 cpuid level : 13 
 wp : yes 
 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
 pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
 rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc 
 aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 
 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave 
 avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority 
 ept vpid 
 bogomips : 4987.58 
 clflush size : 64 
 cache_alignment : 64 
 address sizes : 46 bits physical, 48 bits virtual 
 power management: 
 
 # pveversion 
 pve-manager/3.2-1/1933730b (running kernel: 2.6.32-27-pve) 
 
 Target host: 
 processor : 11 
 vendor_id : GenuineIntel 
 cpu family : 6 
 model : 44 
 model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz 
 stepping : 2 
 cpu MHz : 2399.404 
 cache size : 12288 KB 
 physical id : 1 
 siblings : 12 
 core id : 9 
 cpu cores : 6 
 apicid : 50 
 initial apicid : 50 
 fpu : yes 
 fpu_exception : yes 
 cpuid level : 11 
 wp : yes 
 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
 pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
 rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc 
 aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 
 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat epb dts 
 tpr_shadow vnmi flexpriority ept vpid 
 bogomips : 4798.17 
 clflush size : 64 
 cache_alignment : 64 
 address sizes : 40 bits physical, 48 bits virtual 
 power management: 
 
 # pveversion 
 pve-manager/3.2-4/e24a91c1 (running kernel: 2.6.32-29-pve) 
 
 //Christian 
 
 
 On 29 Aug 2014, at 12:56, Alexandre DERUMIER  aderum...@odiso.com  wrote: 
 
 
  
 
 Aug 29 11:37:39 ERROR: 

Re: [pve-devel] successfull migration but failed resume

2014-08-29 Thread Alexandre DERUMIER
Note: I just received some new Opteron servers, so I'll do tests next week :)


- Original Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Christian Tari christ...@zaark.com
Cc: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 16:14:09
Subject: Re: [pve-devel] successfull migration but failed resume

I might be able to do some tests but I have to take this E5-2640 server out 
from this production cluster and create a new test cluster. It takes some 
days until I rearrange things. If that’s fine Im okay. 
Does this mean I have to re-install proxmox 3.1 on both cluster nodes? 

If you remove node from a cluster, yes, it's better to reinstall it before join 
a new cluster. 

(BTW: It's proxmox 3.2 right ? not 3.1 ?) 


could be great to test with current 3.10 kernel. 



- Original Message -

From: Christian Tari christ...@zaark.com
To: Alexandre DERUMIER aderum...@odiso.com
Sent: Friday, 29 August 2014 15:19:10
Subject: Re: [pve-devel] successfull migration but failed resume

Good. At least we are on track. 

I might be able to do some tests but I have to take this E5-2640 server out 
from this production cluster and create a new test cluster. It takes some days 
until I rearrange things. If that’s fine Im okay. 
Does this mean I have to re-install proxmox 3.1 on both cluster nodes? 

//Christian 


On 29 Aug 2014, at 15:08, Alexandre DERUMIER aderum...@odiso.com wrote: 

 Can it lead issues if we migrate between two different arch? BTW the prior 
 is HP dL360G8 the latter is HP dl380G7. 
 
 I have same bug with amd opteron 63XX -> 61XX,
 
 I think because of a bug of kvm, with the cpuflags :xsave existing on 63XX 
 and not 61XX. 
 https://lkml.org/lkml/2014/2/22/58 
 
 
 It seem to be your case too, with 
 
 E5-2640 0 @ 2.50GHz : xsave 
 CPU E5645 @ 2.40GHz : no xsave. 
 
 
 Does the migration in the reverse way is working ? 
 
 
 I have a kernel 3.10 patch for this xsave bug, but don't have tested it yet. 
 Don't known if you could test it ? 
 
 
 
 
 - Original Message -
 
 From: Christian Tari christ...@zaark.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Sent: Friday, 29 August 2014 14:16:59
 Subject: Re: [pve-devel] successfull migration but failed resume
 
 Yes, the default, kvm64. 
 Can it lead issues if we migrate between two different arch? BTW the prior is 
 HP dL360G8 the latter is HP dl380G7. 
 The strange thing is that it doesn’t happen every time. Especially after a 
 failed migration the subsequent migrations always work. It happens often 
 instances with relatively higher memory usage (6-18GB). Can it be some 
 timeout while the content of the memory is being transferred? 
 Aug 29 11:37:42 ERROR: migration finished with problems (duration 00:04:23) 
 
 
 
 
 //Christian 
 
 
 
 On 29 Aug 2014, at 14:08, Alexandre DERUMIER  aderum...@odiso.com  wrote: 
 
 
 and you guest cpu is kvm64? 
 
 
 - Original Message -
 
 From: Christian Tari christ...@zaark.com
 To: Alexandre DERUMIER aderum...@odiso.com
 Sent: Friday, 29 August 2014 13:02:15
 Subject: Re: [pve-devel] successfull migration but failed resume
 
 Source host: 
 processor : 11 
 vendor_id : GenuineIntel 
 cpu family : 6 
 model : 45 
 model name : Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz 
 stepping : 7 
 cpu MHz : 2493.793 
 cache size : 15360 KB 
 physical id : 0 
 siblings : 12 
 core id : 5 
 cpu cores : 6 
 apicid : 11 
 initial apicid : 11 
 fpu : yes 
 fpu_exception : yes 
 cpuid level : 13 
 wp : yes 
 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
 pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
 rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc 
 aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 
 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave 
 avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority 
 ept vpid 
 bogomips : 4987.58 
 clflush size : 64 
 cache_alignment : 64 
 address sizes : 46 bits physical, 48 bits virtual 
 power management: 
 
 # pveversion 
 pve-manager/3.2-1/1933730b (running kernel: 2.6.32-27-pve) 
 
 Target host: 
 processor : 11 
 vendor_id : GenuineIntel 
 cpu family : 6 
 model : 44 
 model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz 
 stepping : 2 
 cpu MHz : 2399.404 
 cache size : 12288 KB 
 physical id : 1 
 siblings : 12 
 core id : 9 
 cpu cores : 6 
 apicid : 50 
 initial apicid : 50 
 fpu : yes 
 fpu_exception : yes 
 cpuid level : 11 
 wp : yes 
 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
 pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
 rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc 
 aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 
 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat epb dts 
 tpr_shadow vnmi flexpriority ept vpid 
 bogomips : 4798.17 
 clflush size : 64 
 

Re: [pve-devel] successfull migration but failed resume

2014-08-29 Thread Michael Rasmussen
On Fri, 29 Aug 2014 17:11:08 +0200 (CEST)
Alexandre DERUMIER aderum...@odiso.com wrote:

 Note, I just receive some new opteron servers, so I'll do tests next week :)
 
As mentioned before I had the same problems migrating from Opteron to
Phenom and Athlon II based CPUs.

From which CPU generation has AMD introduced the cpu flag xsave?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
The world is full of people who have never, since childhood, met an
open doorway with an open mind.
-- E. B. White




Re: [pve-devel] successfull migration but failed resume

2014-08-29 Thread Alexandre DERUMIER
From which CPU generation has AMD introduced the cpu flag xsave?

I see it on Opteron 63XX, but not on 61XX.

BTW, does it work for you with the current 3.10 kernel? (which doesn't have the
xsave patch yet)


- Original Message -

From: Michael Rasmussen m...@datanom.net
To: pve-devel@pve.proxmox.com
Sent: Friday, 29 August 2014 17:21:26
Subject: Re: [pve-devel] successfull migration but failed resume

On Fri, 29 Aug 2014 17:11:08 +0200 (CEST) 
Alexandre DERUMIER aderum...@odiso.com wrote: 

 Note, I just receive some new opteron servers, so I'll do tests next week :) 
 
As mentioned before I had the same problems migrating from Opteron to 
Phenom and Athlon II based CPUs. 

From which CPU generation has AMD introduced the cpu flag xsave? 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
-- 
/usr/games/fortune -es says: 
The world is full of people who have never, since childhood, met an 
open doorway with an open mind. 
-- E. B. White 



Re: [pve-devel] successfull migration but failed resume

2014-08-29 Thread Michael Rasmussen
On Fri, 29 Aug 2014 17:23:31 +0200 (CEST)
Alexandre DERUMIER aderum...@odiso.com wrote:

 From which CPU generation has AMD introduced the cpu flag xsave?
 
 I see it on Opteron 63XX , but not 61XX.
 
Just found it here: Family 15h and up.
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2013-2076

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Writing free verse is like playing tennis with the net down.




Re: [pve-devel] Better translate to Spanish language

2014-08-29 Thread Cesar Peschiera
Ohhh Dietmar, I didn't know that you thought so. So excuse me if I am
causing inconvenience, but as you told me that we could work on the easy part
of the translation, I assumed that you would not have a problem.


On the other hand, I know that you are very occupied with important changes in
PVE, so I can only say that if you want us to work only on the easy part, I
will be at your command when you have time available.


Awaiting your answer, I say see you soon, and I wish you every success with
your work.


Best regards
Cesar

- Original Message - 
From: Dietmar Maurer diet...@proxmox.com

To: Cesar Peschiera br...@click.com.py; pve-devel@pve.proxmox.com
Sent: Friday, August 29, 2014 4:16 AM
Subject: RE: [pve-devel] Better translate to Spanish language


Do you remember that at the start we talked about working only on the part
that

will be easy for you?


You constantly send emails to me about that topic, asking about when I will 
finish this work.

This is a big waste of time ...




