Re: [pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 2950

2015-06-17 Thread Alexandre DERUMIER
Ok, I found the problem, my fault ;)

The initramfs was not generated after the kernel install,

because I had written a temporary /proxmox_install_mode file for the jessie upgrade

and forgot to remove it.

This file makes the pve-kernel .deb skip its postinst.
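
(A minimal sketch of the guard idea, assuming the maintainer script simply
tests for the flag file; this is hypothetical, not the actual pve-kernel
postinst:

    # skip all postinst work, including initramfs generation,
    # while the installer flag file is present
    exit 0 if -f '/proxmox_install_mode';

Removing the file and re-running update-initramfs then fixes it.)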

- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, June 17, 2015 18:01:47
Subject: Re: [pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 
2950

> The only changes between the two:
> update zfs/spl 0.6.4, bump api version to 9-pve
> update zfs/spl source to 0.6.4
> 
> 
> I don't use zfs as a filesystem, only ext3.
> 
> So, I don't see what the problem could be here?

And you do not load the zfs module? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] RFC: qemu-server : add cloudinit support

2015-06-17 Thread Dietmar Maurer
> Because settings in 'netX' are hot-pluggable, but ip configuration is not.
> What do you think?
> 
> I don't remember if we already managed mixing hot-plugged updates and pending
> values for the same element?
> 
> For example, we already have disk throttling, which is hot-plugged together with
> non-hotplug disk options.
> (But disk throttling is grayed out if the disk has pending options.)

Yes. The question is if we can do better ;-)
 
> I have no objection to having ipconfigX,

I simply don't know if it will have unexpected side effects. But I think
it is worth a try.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] RFC: qemu-server : add cloudinit support

2015-06-17 Thread Alexandre DERUMIER
> I am also not sure if we should store ip config inside netX, or use
> a separate property like:
> 
> net0: virtio=8A:5E:75:3B:29:33,bridge=vmbr0,firewall=1
> ipconfig0: ip=1.2.3.4/20,gw=...
> 
> Because settings in 'netX' are hot-pluggable, but ip configuration is not.
> What do you think?

I don't remember if we already managed mixing hot-plugged updates and pending
values for the same element?

For example, we already have disk throttling, which is hot-plugged together with
non-hotplug disk options.
(But disk throttling is grayed out if the disk has pending options.)

I have no objection to having ipconfigX,




- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: Wolfgang Bumiller w.bumil...@proxmox.com, pve-devel 
pve-devel@pve.proxmox.com
Sent: Wednesday, June 17, 2015 13:06:30
Subject: Re: [pve-devel] RFC: qemu-server : add cloudinit support

> > I would generate a Digest using all cloud-init related values, and
> > re-generate
> > the iso at VM start
> > if the digest has changed.
> 
> Ok. Maybe we can generate the cloudinit iso file as vmid-digest.iso?
> That way there is no need to store the digest in vmid.conf?

exactly 
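
(A minimal sketch of that naming scheme; the helper name and key list are
hypothetical, not the final implementation:

    use Digest::SHA;

    # name the iso after a digest over all cloud-init related values,
    # so nothing needs to be stored in vmid.conf
    sub cloudinit_iso_path {
        my ($conf, $vmid) = @_;
        my $data = join("\n", map { "$_=" . ($conf->{$_} // '') }
            qw(name searchdomain nameserver sshkey));
        my $digest = Digest::SHA::sha1_hex($data);
        return "/tmp/cloudinit/$vmid-$digest.iso";
    }

At VM start the iso would then only be regenerated if the file for the
current digest does not exist yet.)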

> Also, do we need to set these changed values as pending or not?

Not sure about that, but I guess using pending would be more correct. 

I am also not sure if we should store ip config inside netX, or use
a separate property like:

net0: virtio=8A:5E:75:3B:29:33,bridge=vmbr0,firewall=1
ipconfig0: ip=1.2.3.4/20,gw=...

Because settings in 'netX' are hot-pluggable, but ip configuration is not.
What do you think?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] RFC: qemu-server : add cloudinit support

2015-06-17 Thread Alexandre DERUMIER
Ok, I'll try to do some tests today to see what the best way is,

based on the last patch from Wolfgang.


- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: Wolfgang Bumiller w.bumil...@proxmox.com, pve-devel 
pve-devel@pve.proxmox.com
Sent: Thursday, June 18, 2015 06:10:50
Subject: Re: [pve-devel] RFC: qemu-server : add cloudinit support

> Because settings in 'netX' are hot-pluggable, but ip configuration is not.
> What do you think?
> 
> I don't remember if we already managed mixing hot-plugged updates and
> pending
> values for the same element?
> 
> For example, we already have disk throttling, which is hot-plugged together with
> non-hotplug disk options.
> (But disk throttling is grayed out if the disk has pending options.)

Yes. The question is if we can do better ;-)

> I have no objection to having ipconfigX,

I simply don't know if it will have unexpected side effects. But I think
it is worth a try.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] Fix qemu create wizard when iothread is set

2015-06-17 Thread Emmanuel Kasper
The variable to use to check the disk type is confid,
as set in line 16 with:
var confid = me.confid || (values.controller + values.deviceid);
---
 www/manager/qemu/HDEdit.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager/qemu/HDEdit.js b/www/manager/qemu/HDEdit.js
index e26cbe4..ffca0b0 100644
--- a/www/manager/qemu/HDEdit.js
+++ b/www/manager/qemu/HDEdit.js
@@ -39,7 +39,7 @@ Ext.define('PVE.qemu.HDInputPanel', {
delete me.drive.discard;
}
 
-   if (values.iothread && me.confid.match(/^virtio\d+$/)) {
+   if (values.iothread && confid.match(/^virtio\d+$/)) {
me.drive.iothread = 'on';
} else {
delete me.drive.iothread;
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] RFC: qemu-server : add cloudinit support

2015-06-17 Thread Alexandre DERUMIER
Hi,

> I suppose that would work. But you might want to reinitialize a guest
> with old settings if you screwed them up inside the guest. Maybe a
> user-configurable ID or name of some sort? (The instance id needs to
> change in order for cloud-init to pull the configuration. It does seem
> to load some user-data even when the id doesn't change, but not the
> configuration and meta-data.)

I would like to know when you want to generate the cloudinit iso.

If the user changes settings (ips, hostname, ...), do we want to apply them directly
to the configuration,
or do we want to put them as pending? (For ip that can be ok; for hostname it
seems strange if the user wants to rename his vm.)

The advantage of pending is that we can simply regenerate the iso at vmstart if
pending changes exist,
and the user can also revert wrong settings.



- Original Message -
From: Wolfgang Bumiller w.bumil...@proxmox.com
To: dietmar diet...@proxmox.com
Cc: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, June 16, 2015 07:56:36
Subject: Re: [pve-devel] RFC: qemu-server : add cloudinit support

> > -) Instance ID: VMs need a counter which is bumped at boot time when
> > there have been changes since the previous boot (or just bump on every
> > change, that's easier to implement :P). Cloning needs to reset or bump
> > the counter as well. (Reset if the source's counter is > 1, or bump to 2
> > if it's still 1 etc.)
> 
> Can't we simply generate a digest including all cloudinit configuration
> values?

I suppose that would work. But you might want to reinitialize a guest 
with old settings if you screwed them up inside the guest. Maybe a 
user-configurable ID or name of some sort? (The instance id needs to 
change in order for cloud-init to pull the configuration. It does seem 
to load some user-data even when the id doesn't change, but not the 
configuration and meta-data.) 

On Mon, Jun 15, 2015 at 04:55:57PM +0200, Dietmar Maurer wrote: 
> > Here's the current data I gathered:
> > 
> > -) From the UI perspective, the IP address configuration for cloud-init
> > could be done in the network device's configuration.
> > -) AFAIK there are ebtable patches to do filtering by mac address around
> > which are still pending.
> 
> I re-thought this, and I think this is an unrelated feature. But we also want
> to integrate this in the new 4.0 release.
> 
> > -) Similar to the MAC firewall rules the host can then activate IP
> > firewall rules for the guest system when the VM is booted with
> > cloud-init support.
> 
> We already have special 'ipfilter' ipsets:
> 
> https://pve.proxmox.com/wiki/Proxmox_VE_Firewall
> 
> So we can just add those IPs to the 'ipfilter' ipset (automatically).
> 
> > -) We don't really like the whole rebooting after configuration
> > option. It could be an optional flag, but ideally the user only needs to
> > boot once, and since the IP options are always available in the GUI it
> > also wouldn't be harmful to always include the cloud-init drive. This
> > also improves compatibility with default installations of cloud-init
> > (like on ubuntu, where it otherwise by default tries to connect to a
> > magic IP).
> 
> Yes, we need a way to configure some behavioral cloud-init options,
> for example:
> 
> cloudinit: mode=[never|always|once],template=/etc/pve/cloudinit/test
> 
> 
> > -) From the CLI and configuration side: cloning would need an option to
> > change the IP by network-interface.
> 
> Yes, but this looks challenging to me - it will be difficult to provide
> a nice GUI for that.
> 
> > -) For migration support: if we by default keep the config drives around
> > they need to be stored somewhere on a shared storage. So we need an
> > option to configure which storage the ISOs end up on. Then they'd appear
> > in the template/iso/ directory of the configured storage, which has to be a
> > shared one if you want to be able to migrate. The images are tiny anyway
> > (more filesystem overhead than actual data when they only contain
> > network configuration.)
> > -) Instance ID: VMs need a counter which is bumped at boot time when
> > there have been changes since the previous boot (or just bump on every
> > change, that's easier to implement :P). Cloning needs to reset or bump
> > the counter as well. (Reset if the source's counter is > 1, or bump to 2
> > if it's still 1 etc.)
> 
> Can't we simply generate a digest including all cloudinit configuration
> values?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] remove tcmalloc default memory allocator

2015-06-17 Thread Alexandre Derumier
and add a dependency on libjemalloc1

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 debian/control| 4 ++--
 debian/patches/series | 1 -
 debian/rules  | 2 +-
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/debian/control b/debian/control
index b1ffa48..f885aaf 100644
--- a/debian/control
+++ b/debian/control
@@ -2,12 +2,12 @@ Source: pve-qemu-kvm
 Section: admin
 Priority: extra
 Maintainer: Proxmox Support Team supp...@proxmox.com
-Build-Depends: debhelper (>= 5), autotools-dev, libpci-dev, quilt, texinfo, 
texi2html, libgnutls28-dev, libsdl1.2-dev, check, libaio-dev, uuid-dev, 
librbd-dev (>= 0.48), libiscsi-dev (>= 1.12.0), libspice-protocol-dev (>= 
0.12.5),  pve-libspice-server-dev (>= 0.12.5-1), libusbredirparser-dev (>= 
0.6-2), glusterfs-common (>= 3.5.2-1), libusb-1.0-0-dev (>= 1.0.17-1), 
xfslibs-dev, libnuma-dev, libgoogle-perftools-dev
+Build-Depends: debhelper (>= 5), autotools-dev, libpci-dev, quilt, texinfo, 
texi2html, libgnutls28-dev, libsdl1.2-dev, check, libaio-dev, uuid-dev, 
librbd-dev (>= 0.48), libiscsi-dev (>= 1.12.0), libspice-protocol-dev (>= 
0.12.5),  pve-libspice-server-dev (>= 0.12.5-1), libusbredirparser-dev (>= 
0.6-2), glusterfs-common (>= 3.5.2-1), libusb-1.0-0-dev (>= 1.0.17-1), 
xfslibs-dev, libnuma-dev
 Standards-Version: 3.7.2
 
 Package: pve-qemu-kvm
 Architecture: any
-Depends: iproute, bridge-utils, python, libsdl1.2debian, libaio1, libuuid1, 
ceph-common (>= 0.48), libiscsi4 (>= 1.12.0), pve-libspice-server1 (>= 
0.12.5-1), ${shlibs:Depends}, ${misc:Depends}, libusbredirparser1 (>= 0.6-2), 
glusterfs-common (>= 3.5.2-1), libusb-1.0-0 (>= 1.0.17-1), numactl, 
libgoogle-perftools4
+Depends: iproute, bridge-utils, python, libsdl1.2debian, libaio1, libuuid1, 
ceph-common (>= 0.48), libiscsi4 (>= 1.12.0), pve-libspice-server1 (>= 
0.12.5-1), ${shlibs:Depends}, ${misc:Depends}, libusbredirparser1 (>= 0.6-2), 
glusterfs-common (>= 3.5.2-1), libusb-1.0-0 (>= 1.0.17-1), numactl, libjemalloc1
 Conflicts: qemu, qemu-kvm, kvm, pve-kvm, pve-qemu-kvm-2.6.18
 Replaces: pve-kvm, pve-qemu-kvm-2.6.18
 Description: Full virtualization on x86 hardware
diff --git a/debian/patches/series b/debian/patches/series
index f270864..d2e0ecf 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -30,4 +30,3 @@ glusterfs-daemonize.patch
 gluster-backupserver.patch
 add-qmp-get-link-status.patch
 0001-friendlier-ai_flag-hints-for-ipv6-hosts.patch
-tcmalloc.patch
diff --git a/debian/rules b/debian/rules
index 733b4e0..2bf49eb 100755
--- a/debian/rules
+++ b/debian/rules
@@ -33,7 +33,7 @@ endif
 config.status: configure
dh_testdir
# Add here commands to configure the package.
-   ./configure --with-confsuffix=/kvm --target-list=x86_64-softmmu 
--prefix=/usr --datadir=/usr/share --docdir=/usr/share/doc/pve-qemu-kvm 
--sysconfdir=/etc --disable-xen --enable-vnc-tls --enable-sdl --enable-uuid 
--enable-linux-aio --enable-rbd --enable-libiscsi --disable-smartcard-nss 
--audio-drv-list=alsa --enable-spice --enable-usb-redir --enable-glusterfs 
--enable-libusb --disable-gtk --enable-xfsctl --enable-numa --disable-strip 
--enable-tcmalloc
+   ./configure --with-confsuffix=/kvm --target-list=x86_64-softmmu 
--prefix=/usr --datadir=/usr/share --docdir=/usr/share/doc/pve-qemu-kvm 
--sysconfdir=/etc --disable-xen --enable-vnc-tls --enable-sdl --enable-uuid 
--enable-linux-aio --enable-rbd --enable-libiscsi --disable-smartcard-nss 
--audio-drv-list=alsa --enable-spice --enable-usb-redir --enable-glusterfs 
--enable-libusb --disable-gtk --enable-xfsctl --enable-numa --disable-strip
 
 build: patch build-stamp
 
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] qemu-kvm : use jemalloc as default memory allocator

2015-06-17 Thread Alexandre Derumier
Hi,
I have done a lot of benchmarking with ceph these last days,

and I have had some performance problems with tcmalloc
when increasing the number of disks & iothreads.

The main problem is that tcmalloc uses a shared thread cache of 16MB
by default.
With more threads this cache is shared, and some bad garbage collection
can occur if the cache is too small.

It's possible to increase it with an env var:
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=256MB
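
(A minimal sketch of setting this from Perl before spawning kvm, assuming
the variable takes a plain byte count rather than a "256MB"-style suffix:

    # raise tcmalloc's shared thread cache to 256 MiB for child processes
    $ENV{TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES} = 256 * 1024 * 1024;

This mirrors the LD_PRELOAD approach used in the jemalloc patch below.)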


I have done tests: with the default 16MB and the current jessie tcmalloc (2.2.1),
it is really bad with more than 2 disks.
Increasing to 256MB helps, but there are still problems with 16 disks.

tcmalloc 2.4 seems to work fine, and is faster, but I think we don't want to
maintain
an extra package.

That's why I have tested with jemalloc, and here there is no performance problem,
and almost the same performance as tcmalloc 2.4.


(There are a lot of discussions on the ceph mailing list about this, because this
tcmalloc problem
also occurs on OSDs with a lot of iops.)


Here are the benchmark results of 1 qemu vm, randread 4K, iodepth=32 (iops):



libc6
--

1 disk   29052
2 disks  55878
4 disks  127899
8 disks  240566
15 disks 269976

jemalloc


1 disk   41278
2 disks  75781
4 disks  195351
8 disks  294241
15 disks 298199



tcmalloc 2.2.1 default 16M cache
--  

1 disk   37911
2 disks  67698
4 disks  41076
8 disks  43312
15 disks 37569


tcmalloc 2.2.1 : 256M
--- 

1 disk  33914
2 disks 58839
4 disks 148205
8 disks 213298
15 disks 218383


tcmalloc 2.4 : 256M cache
-
1 disk   42160
2 disks  83135
4 disks  194591
8 disks  306038
15 disks 302278



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] use jemalloc as memory allocator

2015-06-17 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ab9ac74..455c473 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4153,6 +4153,7 @@ sub vm_start {
 
# set environment variable useful inside network script
$ENV{PVE_MIGRATED_FROM} = $migratedfrom if $migratedfrom;
+   $ENV{LD_PRELOAD} = "/usr/lib/x86_64-linux-gnu/libjemalloc.so.1";
 
my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid, 
$conf, $defaults, $forcemachine);
 
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] RFC: qemu-server : add cloudinit support

2015-06-17 Thread Dietmar Maurer
> I would like to know when you want to generate the cloudinit iso.
> 
> If the user changes settings (ips, hostname, ...), do we want to apply them directly
> to the configuration,
> or do we want to put them as pending? (For ip that can be ok; for hostname it
> seems strange if the user wants to rename his vm.)
> 
> The advantage of pending is that we can simply regenerate the iso at vmstart if
> pending changes exist,
> and the user can also revert wrong settings.

I would generate a Digest using all cloud-init related values, and re-generate
the iso at VM start
if the digest has changed.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] PVE:Daemon start/restart with systemd

2015-06-17 Thread Alen Grizonic
Couldn't agree more, so here is the polished version, with systemctl 
also added to stop the service.



diff --git a/src/PVE/Daemon.pm b/src/PVE/Daemon.pm
index e051500..fb9a923 100644
--- a/src/PVE/Daemon.pm
+++ b/src/PVE/Daemon.pm
@@ -578,6 +578,16 @@ my $read_pid = sub {
 return $pid;
 };

+# checks if the process was started by systemd
+my $init_ppid = sub {
+
+if (getppid() == 1) {
+   return 1;
+} else {
+   return 0;
+}
+};
+
 sub running {
 my ($self) = @_;

@@ -654,7 +664,11 @@ sub register_start_command {
code => sub {
my ($param) = @_;

-   $self->start($param->{debug});
+   if ($init_ppid()) {
+       $self->start($param->{debug});
+   } else {
+       PVE::Tools::run_command(['systemctl', 'start', $self->{name}]);
+   }

return undef;
}});
@@ -700,8 +714,12 @@ sub register_restart_command {
code => sub {
my ($param) = @_;

-   $reload_daemon($self, $use_hup);
-
+   if ($init_ppid()) {
+       $reload_daemon($self, $use_hup);
+   } else {
+       PVE::Tools::run_command(['systemctl', $use_hup ? 'reload-or-restart' : 'restart', $self->{name}]);
+   }
+
return undef;
}});
 }
@@ -749,8 +767,12 @@ sub register_stop_command {

code => sub {
my ($param) = @_;
-
-   $self->stop();
+
+   if ($init_ppid()) {
+       $self->stop();
+   } else {
+       PVE::Tools::run_command(['systemctl', 'stop', $self->{name}]);
+   }

return undef;
}});


On 06/16/2015 04:48 PM, Dietmar Maurer wrote:

some comments inline


diff --git a/src/PVE/Daemon.pm b/src/PVE/Daemon.pm
index e051500..16e08c9 100644
--- a/src/PVE/Daemon.pm
+++ b/src/PVE/Daemon.pm
@@ -578,6 +578,16 @@ my $read_pid = sub {
   return $pid;
   };

+my $init_ppid = sub {
+my $ppid = getppid();
+
+if ($ppid == 1) {
+   return 1;
+} else {
+   return 0;
+}
+};
+
   sub running {
   my ($self) = @_;

@@ -654,7 +664,11 @@ sub register_start_command {
code => sub {
  my ($param) = @_;

-   $self->start($param->{debug});
+   if ($init_ppid()) {
+       $self->start($param->{debug});
+   } else {
+       PVE::Tools::run_command(['systemctl', 'start', $self->{name}]);
+   }

  return undef;
  }});
@@ -666,7 +680,7 @@ my $reload_daemon = sub {
 if ($self->{env_restart_pve_daemon}) {
    $self->start();
 } else {
-   my ($running, $pid) = $self->running();
+   my ($running, $pid) = $self->running();

useless?


  if (!$running) {
$self->start();
  } else {
@@ -700,8 +714,23 @@ sub register_restart_command {
code => sub {
  my ($param) = @_;

-   $reload_daemon($self, $use_hup);
-
+   if ($init_ppid()) {
+       $reload_daemon($self, $use_hup);
+   } else {
+       my ($running, $pid) = $self->running();
+       if (!$running) {
+           PVE::Tools::run_command(['systemctl', 'start', $self->{name}]);
+       } else {
+           if ($use_hup) {
+               syslog('info', "send HUP to $pid");
+               kill 1, $pid;
+               PVE::Tools::run_command(['systemctl', 'start', $self->{name}]);

It is already running, so what is the purpose of that 'start'?

+           } else {
+               PVE::Tools::run_command(['systemctl', 'restart', $self->{name}]);
+           }
+       }
+   }
+

I thought we can simply use the following?

  if ($init_ppid()) {
     $reload_daemon($self, $use_hup);
  } else {
     PVE::Tools::run_command(['systemctl', $use_hup ? 'reload-or-restart' :
'restart', $self->{name}]);
  }
  
We also want to use systemctl to stop the service.



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] RFC: qemu-server : add cloudinit support

2015-06-17 Thread Alexandre DERUMIER
> I would generate a Digest using all cloud-init related values, and
> re-generate
> the iso at VM start
> if the digest has changed.

Ok. Maybe we can generate the cloudinit iso file as vmid-digest.iso?
That way there is no need to store the digest in vmid.conf?

Also, do we need to set these changed values as pending or not?


- Original Message -
From: dietmar diet...@proxmox.com
To: Wolfgang Bumiller w.bumil...@proxmox.com, aderumier 
aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, June 17, 2015 11:40:24
Subject: Re: [pve-devel] RFC: qemu-server : add cloudinit support

> I would like to know when you want to generate the cloudinit iso.
> 
> If the user changes settings (ips, hostname, ...), do we want to apply them directly
> to the configuration,
> or do we want to put them as pending? (For ip that can be ok; for hostname it
> seems strange if the user wants to rename his vm.)
> 
> The advantage of pending is that we can simply regenerate the iso at vmstart if
> pending changes exist,
> and the user can also revert wrong settings.

I would generate a Digest using all cloud-init related values, and re-generate
the iso at VM start
if the digest has changed.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 1/2] implement cloudinit v2

2015-06-17 Thread Wolfgang Bumiller
From: Alexandre Derumier aderum...@odiso.com

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm | 181 +-
 control.in|   2 +-
 2 files changed, 179 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ab9ac74..7956c50 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -18,17 +18,19 @@ use Cwd 'abs_path';
 use IPC::Open3;
 use JSON;
 use Fcntl;
+use UUID;
 use PVE::SafeSyslog;
 use Storable qw(dclone);
 use PVE::Exception qw(raise raise_param_exc);
 use PVE::Storage;
-use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline 
dir_glob_foreach);
+use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline 
dir_glob_foreach $IPV6RE $IPV4RE);
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file 
cfs_lock_file);
 use PVE::INotify;
 use PVE::ProcFSTools;
 use PVE::QMPClient;
 use PVE::RPCEnvironment;
+
 use Time::HiRes qw(gettimeofday);
 
 my $qemu_snap_storage = {rbd = 1, sheepdog = 1};
@@ -384,6 +386,28 @@ EODESCR
maxLength => 256,
optional => 1,
 },
+searchdomain => {
+optional => 1,
+type => 'string',
+description => "Sets DNS search domains for a container. Create will 
automatically use the setting from the host if you neither set searchdomain nor 
nameserver.",
+},
+nameserver => {
+optional => 1,
+type => 'string',
+description => "Sets DNS server IP address for a container. Create 
will automatically use the setting from the host if you neither set 
searchdomain nor nameserver.",
+},
+},
+sshkey => {
+optional => 1,
+type => 'string',
+description => "SSH keys for root",
+},
+cloudinit => {
+   optional => 1,
+   type => 'boolean',
+   description => "Enable cloudinit config generation.",
+   default => 0,
+},
+
 };
 
 # what about other qemu settings ?
@@ -712,6 +736,8 @@ sub get_iso_path {
return get_cdrom_path();
 } elsif ($cdrom eq 'none') {
return '';
+} elsif ($cdrom eq 'cloudinit') {
+   return "/tmp/cloudinit/$vmid/configdrive.iso";
 } elsif ($cdrom =~ m|^/|) {
return $cdrom;
 } else {
@@ -723,7 +749,7 @@ sub get_iso_path {
 sub filename_to_volume_id {
 my ($vmid, $file, $media) = @_;
 
-if (!($file eq 'none' || $file eq 'cdrom' ||
+ if (!($file eq 'none' || $file eq 'cdrom' || $file eq 'cloudinit' ||
  $file =~ m|^/dev/.+| || $file =~ m/^([^:]+):(.+)$/)) {
 
return undef if $file =~ m|/|;
@@ -1356,6 +1382,11 @@ sub parse_net {
$res->{firewall} = $1;
} elsif ($kvp =~ m/^link_down=([01])$/) {
$res->{link_down} = $1;
+   } elsif ($kvp =~ m/^cidr=($IPV6RE|$IPV4RE)\/(\d+)$/) {
+   $res->{address} = $1;
+   $res->{netmask} = $2;
+   } elsif ($kvp =~ m/^gateway=($IPV6RE|$IPV4RE)$/) {
+   $res->{gateway} = $1;
} else {
return undef;
}
@@ -4143,12 +4174,14 @@ sub vm_start {
check_lock($conf) if !$skiplock;
 
die "VM $vmid already running\n" if check_running($vmid, undef, 
$migratedfrom);
-
+   
if (!$statefile && scalar(keys %{$conf->{pending}})) {
vmconfig_apply_pending($vmid, $conf, $storecfg);
$conf = load_config($vmid); # update/reload
}
 
+   generate_cloudinitconfig($conf, $vmid);
+
my $defaults = load_defaults();
 
# set environment variable useful inside network script
@@ -6251,4 +6284,146 @@ sub scsihw_infos {
 return ($maxdev, $controller, $controller_prefix);
 }
 
+sub generate_cloudinitconfig {
+my ($conf, $vmid) = @_;
+
+return if !$conf->{cloudinit};
+
+my $path = "/tmp/cloudinit/$vmid";
+
+mkdir "/tmp/cloudinit";
+mkdir $path;
+mkdir "$path/drive";
+mkdir "$path/drive/openstack";
+mkdir "$path/drive/openstack/latest";
+mkdir "$path/drive/openstack/content";
+generate_cloudinit_userdata($conf, $path);
+generate_cloudinit_metadata($conf, $path);
+generate_cloudinit_network($conf, $path);
+
+my $cmd = [];
+push @$cmd, 'genisoimage';
+push @$cmd, '-R';
+push @$cmd, '-V', 'config-2';
+push @$cmd, '-o', "$path/configdrive.iso";
+push @$cmd, "$path/drive";
+
+run_command($cmd);
+rmtree("$path/drive");
+my $drive = PVE::QemuServer::parse_drive('ide3', 'cloudinit,media=cdrom');
+$conf->{'ide3'} = PVE::QemuServer::print_drive($vmid, $drive);
+update_config_nolock($vmid, $conf, 1);
+
+}
+
+sub generate_cloudinit_userdata {
+my ($conf, $path) = @_;
+
+my $content = "#cloud-config\n";
+my $hostname = $conf->{searchdomain} ? 
$conf->{name} . "." . $conf->{searchdomain} : $conf->{name};
+$content .= "fqdn: $hostname\n";
+$content .= "manage_etc_hosts: true\n";
+
+if ($conf->{sshkey}) {
+   $content .= "users:\n";
+   $content .= "  - default\n";
+   $content .= 

[pve-devel] [PATCH 2/2] cloud-init changes

2015-06-17 Thread Wolfgang Bumiller
 * Changing cidr and gateway setting names to ip, ip6, gw,
   gw6, as these are the names we are using for LXC.
 * Adding explicit ip=dhcp and ip6=dhcp options.
 * Removing the config-update code and instead generating
   the ide3 commandline in config_to_command.
   - Adding a conflict check to write_vm_config similar to
     the one for 'cdrom'.
 * Replacing UUID generation with a SHA1 hash of the
   concatenated userdata and network configuration. For this
   generate_cloudinit_userdata/network now returns the
   content variable.
 * Finishing ipv6 support in generate_cloudinit_network.
   Note that ipv4 now only defaults to dhcp if no ipv6
   address was specified. (Explicitly requested dhcp is
   always used.)
---
 PVE/QemuServer.pm | 71 +++
 1 file changed, 51 insertions(+), 20 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7956c50..b9738da 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -18,7 +18,6 @@ use Cwd 'abs_path';
 use IPC::Open3;
 use JSON;
 use Fcntl;
-use UUID;
 use PVE::SafeSyslog;
 use Storable qw(dclone);
 use PVE::Exception qw(raise raise_param_exc);
@@ -1382,11 +1381,20 @@ sub parse_net {
$res->{firewall} = $1;
} elsif ($kvp =~ m/^link_down=([01])$/) {
$res->{link_down} = $1;
-   } elsif ($kvp =~ m/^cidr=($IPV6RE|$IPV4RE)\/(\d+)$/) {
+   } elsif ($kvp =~ m/^ip=dhcp$/) {
+   $res->{address} = 'dhcp';
+   } elsif ($kvp =~ m/^ip=($IPV4RE)\/(\d+)$/) {
$res->{address} = $1;
$res->{netmask} = $2;
-   } elsif ($kvp =~ m/^gateway=($IPV6RE|$IPV4RE)$/) {
+   } elsif ($kvp =~ m/^gw=($IPV4RE)$/) {
$res->{gateway} = $1;
+   } elsif ($kvp =~ m/^ip6=dhcp$/) {
+   $res->{address6} = 'dhcp';
+   } elsif ($kvp =~ m/^ip6=($IPV6RE)\/(\d+)$/) {
+   $res->{address6} = $1;
+   $res->{netmask6} = $2;
+   } elsif ($kvp =~ m/^gw6=($IPV6RE)$/) {
+   $res->{gateway6} = $1;
} else {
return undef;
}
@@ -1995,6 +2003,11 @@ sub write_vm_config {
delete $conf->{cdrom};
 }
 
+if ($conf->{cloudinit}) {
+   die "option cloudinit conflicts with ide3\n" if $conf->{ide3};
+   delete $conf->{cloudinit};
+}
+
 # we do not use 'smp' any longer
 if ($conf->{sockets}) {
delete $conf->{smp};
@@ -3115,6 +3128,8 @@ sub config_to_command {
push @$devices, '-device', print_drivedevice_full($storecfg, $conf, 
$vmid, $drive, $bridges);
 });
 
+generate_cloudinit_command($conf, $vmid, $storecfg, $bridges, $devices);
+
 for (my $i = 0; $i < $MAX_NETS; $i++) {
  next if !$conf->{"net$i"};
  my $d = parse_net($conf->{"net$i"});
@@ -6297,9 +6312,9 @@ sub generate_cloudinitconfig {
 mkdir "$path/drive/openstack";
 mkdir "$path/drive/openstack/latest";
 mkdir "$path/drive/openstack/content";
-generate_cloudinit_userdata($conf, $path);
-generate_cloudinit_metadata($conf, $path);
-generate_cloudinit_network($conf, $path);
+my $digest_data = generate_cloudinit_userdata($conf, $path)
+   . generate_cloudinit_network($conf, $path);
+generate_cloudinit_metadata($conf, $path, $digest_data);
 
 my $cmd = [];
 push @$cmd, 'genisoimage';
@@ -6310,10 +6325,18 @@ sub generate_cloudinitconfig {
 
 run_command($cmd);
 rmtree("$path/drive");
-my $drive = PVE::QemuServer::parse_drive('ide3', 'cloudinit,media=cdrom');
-$conf->{'ide3'} = PVE::QemuServer::print_drive($vmid, $drive);
-update_config_nolock($vmid, $conf, 1);
+}
+
+sub generate_cloudinit_command {
+my ($conf, $vmid, $storecfg, $bridges, $devices) = @_;
 
+return if !$conf->{cloudinit};
+
+my $path = "/tmp/cloudinit/$vmid/configdrive.iso";
+my $drive = parse_drive('ide3', 'cloudinit,media=cdrom');
+my $drive_cmd = print_drive_full($storecfg, $vmid, $drive);
+push @$devices, '-drive', $drive_cmd;
+push @$devices, '-device', print_drivedevice_full($storecfg, $conf, $vmid, 
$drive, $bridges);
 }
 
 sub generate_cloudinit_userdata {
@@ -6336,15 +6359,13 @@ sub generate_cloudinit_userdata {
 
 my $fn = "$path/drive/openstack/latest/user_data";
 file_write($fn, $content);
-
+return $content;
 }
 
 sub generate_cloudinit_metadata {
-my ($conf, $path) = @_;
+my ($conf, $path, $digest_data) = @_;
 
-my ($uuid, $uuid_str);
-UUID::generate($uuid);
-UUID::unparse($uuid, $uuid_str);
+my $uuid_str = Digest::SHA::sha1_hex($digest_data);
 
 my $content = "{\n";
 $content .= " \"uuid\": \"$uuid_str\",\n";
@@ -6353,9 +6374,7 @@ sub generate_cloudinit_metadata {
 
 my $fn = "$path/drive/openstack/latest/meta_data.json";
 
-return file_write($fn, $content);
-
-
+file_write($fn, $content);
 }
 
 my $ipv4_reverse_mask = [
@@ -6406,14 +6425,26 @@ sub generate_cloudinit_network {
$opt =~ s/net/eth/;
 
$content .= "auto $opt\n";
-   if ($net->{address}) {
+   

[pve-devel] [PATCH 0/2] cloud-init v3

2015-06-17 Thread Wolfgang Bumiller
I'm attaching a second patch to your cloud-init patch, Alexandre.
Here's the summary of the changes:

 * Changing cidr and gateway setting names to ip, ip6, gw,
   gw6, as these are the names we are using for LXC.
 * Adding explicit ip=dhcp and ip6=dhcp options.
 * Removing the config-update code and instead generating
   the ide3 commandline in config_to_command.
   - Adding a conflict check to write_vm_config similar to
     the one for 'cdrom'.
 * Replacing UUID generation with a SHA1 hash of the
   concatenated userdata and network configuration. For this
   generate_cloudinit_userdata/network now returns the
   content variable.
 * Finishing ipv6 support in generate_cloudinit_network.
   Note that ipv4 now only defaults to dhcp if no ipv6
   address was specified. (Explicitly requested dhcp is
   always used.)

Both the setting renames and UUID changes were requested by Dietmar.
Having consistent setting names makes sense, and an explicit dhcp
option when ipv4 and ipv6 can be configured separately is also sort of
required if you want, for instance, to use dhcp for ipv4 but a fixed
ipv6 address (see the example line below).
(Also, in the container configuration GUI ipv4 and ipv6 are separate
boxes, so it makes sense to use the same layout here once we update
the GUI.)
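
(A hypothetical example of that mixed setup, using the option names from
patch 2/2 and made-up addresses:

    net0: virtio=8A:5E:75:3B:29:33,bridge=vmbr0,ip=dhcp,ip6=2001:db8::10/64,gw6=2001:db8::1

ipv4 comes from dhcp while ipv6 stays static.)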
As for the ide3 change: rewriting the configuration with an ide3 line
means that you also have to take care to remove it when disabling
cloud-init; it also means a cloudinit cdrom drive appears in the GUI,
but removing it wouldn't disable cloudinit. Catching this kind of
settings change to disable cloudinit would be a possibility, but it
seems like a bit too much magic.
So instead, `cloudinit: 1` is considered incompatible with an `ide3:`
line, and behaves like mixing a `cdrom:` and an `ide2:` line.
Alexandre Derumier (1):
  implement cloudinit v2

Wolfgang Bumiller (1):
  cloud-init changes

 PVE/QemuServer.pm | 212 +-
 control.in|   2 +-
 2 files changed, 210 insertions(+), 4 deletions(-)

-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 2950

2015-06-17 Thread Alexandre DERUMIER
Hi,
I'm currently testing the new kernel 3.19;
it's working fine on my new dell r630 servers.


But I get a crash in the initramfs on my old test server (dell poweredge 2950, 
megasas driver):
kernel panic, cannot find root / ...


With kernel 3.10, it boots fine with
pve-kernel-3.10.0-8-pve   3.10.0-33 


But it crashes (same error) with

pve-kernel-3.10.0-9-pve   3.10.0-34


The only changes between the two:
update zfs/spl 0.6.4, bump api version to 9-pve
update zfs/spl source to 0.6.4


I don't use zfs as a filesystem, only ext3.

So, I don't see what the problem could be here?



Any idea?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 2950

2015-06-17 Thread Alexandre DERUMIER
I just noticed that I don't have

/boot/initrd.img-3.10.0-9-pve

and
#update-initramfs -k all -u

update-initramfs: Generating /boot/initrd.img-3.10.0-8-pve
update-initramfs: Generating /boot/initrd.img-3.10.0-7-pve
update-initramfs: Generating /boot/initrd.img-3.10.0-6-pve
...
(not -9-pve)

?

I don't remember how update-initramfs lists the kernels?


- Original Message -
From: datanom.net m...@datanom.net
To: pve-devel pve-devel@pve.proxmox.com
Sent: Wednesday, June 17, 2015 17:34:54
Subject: Re: [pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 
2950

On Wed, 17 Jun 2015 17:28:28 +0200 (CEST) 
Alexandre DERUMIER aderum...@odiso.com wrote: 

> 
> I don't use zfs as a filesystem, only ext3.
> 
> So, I don't see what the problem could be here?
> 
Maybe some changes to the grub boot loader? 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael at rasmussen dot cc 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E 
mir at datanom dot net 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C 
mir at miras dot org 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 
-- 
/usr/games/fortune -es says: 
Of course I can keep secrets. It's the people I tell them to that 
can't keep them. -Anthony Haden-Guest 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 2950

2015-06-17 Thread Michael Rasmussen
On Wed, 17 Jun 2015 17:28:28 +0200 (CEST)
Alexandre DERUMIER aderum...@odiso.com wrote:

> 
> I don't use zfs as a filesystem, only ext3.
> 
> So, I don't see what the problem could be here?
> 
Maybe some changes to the grub boot loader?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
  Of course I can keep secrets. It's the people I tell them to that
  can't keep them. -Anthony Haden-Guest


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 2950

2015-06-17 Thread Dietmar Maurer
> The only changes between the two:
> update zfs/spl 0.6.4, bump api version to 9-pve
> update zfs/spl source to 0.6.4
> 
> 
> I don't use zfs as a filesystem, only ext3.
> 
> So, I don't see what the problem could be here?

And you do not load the zfs module?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] kernel 3.10-9 & 3.19 don't boot on old dell poweredge 2950

2015-06-17 Thread Michael Rasmussen
On Wed, 17 Jun 2015 17:40:26 +0200 (CEST)
Alexandre DERUMIER aderum...@odiso.com wrote:

> I just noticed that I don't have
> 
> /boot/initrd.img-3.10.0-9-pve
> 
> and
> #update-initramfs -k all -u
> 
> update-initramfs: Generating /boot/initrd.img-3.10.0-8-pve
> update-initramfs: Generating /boot/initrd.img-3.10.0-7-pve
> update-initramfs: Generating /boot/initrd.img-3.10.0-6-pve
> ...
> (not -9-pve)
> 
Why not try pve-kernel-3.10.0-10-pve?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
You scratch my tape, and I'll scratch yours.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel