[pve-devel] [PATCH pve-common] added PVE::Tools::next_free_nbd_dev

2015-08-06 Thread Wolfgang Bumiller
This comes from pve-container and will be used in
qemu-server with cloudinit.
---
 src/PVE/Tools.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index a7bcd35..9b6a3e4 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@ -740,6 +740,16 @@ sub next_spice_port {
 return next_unused_port(61000, 61099, $family);
 }
 
+sub next_free_nbd_dev {
+    for(my $i = 0;;$i++) {
+	my $dev = "/dev/nbd$i";
+	last if ! -b $dev;
+	next if -f "/sys/block/nbd$i/pid"; # busy
+	return $dev;
+    }
+    die "unable to find free nbd device\n";
+}
+
 # NOTE: NFS syscall can't be interrupted, so alarm does 
 # not work to provide timeouts.
 # from 'man nfs': Only SIGKILL can interrupt a pending NFS operation
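For readers outside Perl, the scan logic of next_free_nbd_dev can be sketched as follows; this is a hypothetical Python rendering where the existence and busy checks (the `-b /dev/nbdN` and `-f /sys/block/nbdN/pid` tests) are injected as callables so the loop can be exercised without real NBD device nodes:

```python
def next_free_nbd_dev(is_block_dev, is_busy):
    """Return the first /dev/nbdN that exists as a block device and has
    no pid file under /sys/block (i.e. is not attached to anything).

    is_block_dev and is_busy are injected predicates standing in for
    the filesystem tests in the original Perl code."""
    i = 0
    # stop scanning once the device node no longer exists: the kernel
    # creates a contiguous range of nbd devices
    while is_block_dev(f"/dev/nbd{i}"):
        if not is_busy(f"/sys/block/nbd{i}/pid"):
            return f"/dev/nbd{i}"
        i += 1
    raise RuntimeError("unable to find free nbd device")
```

The predicate injection is only for testability; the Perl version queries the filesystem directly.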
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] coreboot support

2015-08-06 Thread Michael Rasmussen
On Thu, 6 Aug 2015 01:37:34 +0200
Michael Rasmussen m...@datanom.net wrote:

 
 I have attached both configs and my boot.rom for testing (should be
 renamed to .config in the appropriate folders)
 
A set of more refined config files for coreboot and seabios-1.8.2
attached. These work excellently and boot very quickly.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
--
/usr/games/fortune -es says:
Fame is a vapor; popularity an accident; the only earthly certainty is
oblivion.
-- Mark Twain


seabios-1.8.2.config
Description: Binary data


coreboot-4.1.config
Description: Binary data




[pve-devel] [RFC PATCH v9 4/6] cloudinit: use qcow2 for future snapshot support

2015-08-06 Thread Wolfgang Bumiller
The config-disk is now generated into a qcow2 located on a
configured storage.
It is now also storage-managed and so live-migration and
live-snapshotting should work as they do for regular hard
drives.
Config drives are recognized by their storage name of the
form: VMID/vm-VMID-cloudinit.qcow2
Example:
ahci0: local:112/vm-112-cloudinit.qcow2,media=cdrom

FIXME: This was easier to implement. Ideally the VMID
wouldn't be required at all but this requires some more
changes so I'm leaving this open for comments for now.
---
 PVE/QemuServer.pm | 85 +--
 1 file changed, 64 insertions(+), 21 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0786c49..2e9717c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -400,13 +400,6 @@ EODESCR
 	type => 'string',
 	description => "Ssh keys for root",
     },
-    cloudinit => {
-	optional => 1,
-	type => 'boolean',
-	description => "Enable cloudinit config generation.",
-	default => 0,
-    },
-
 };
 
 # what about other qemu settings ?
@@ -2070,11 +2063,6 @@ sub write_vm_config {
 	delete $conf->{cdrom};
     }
 
-    if ($conf->{cloudinit}) {
-	die "option cloudinit conflicts with ide3\n" if $conf->{ide3};
-	delete $conf->{cloudinit};
-    }
-
     # we do not use 'smp' any longer
     if ($conf->{sockets}) {
 	delete $conf->{smp};
@@ -4909,6 +4897,7 @@ sub print_pci_addr {
 	'net29' => { bus => 1, addr => 24 },
 	'net30' => { bus => 1, addr => 25 },
 	'net31' => { bus => 1, addr => 26 },
+	'ahci1' => { bus => 1, addr => 27 },
 	'virtio6' => { bus => 2, addr => 1 },
 	'virtio7' => { bus => 2, addr => 2 },
 	'virtio8' => { bus => 2, addr => 3 },
@@ -6431,10 +6420,70 @@ sub scsihw_infos {
 return ($maxdev, $controller, $controller_prefix);
 }
 
+# FIXME: Reasonable size? qcow2 shouldn't grow if the space isn't used anyway?
+my $cloudinit_iso_size = 5; # in MB
+
+sub prepare_cloudinit_disk {
+    my ($vmid, $storeid) = @_;
+
+    my $storecfg = PVE::Storage::config();
+    my $imagedir = PVE::Storage::get_image_dir($storecfg, $storeid, $vmid);
+    my $iso_name = "vm-$vmid-cloudinit.qcow2";
+    my $iso_path = "$imagedir/$iso_name";
+    if (!-e $iso_path) {
+	# vdisk_alloc size is in K
+	PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, 'qcow2', $iso_name, $cloudinit_iso_size*1024);
+    }
+    return ($iso_path, 'qcow2');
+}
+
+# FIXME: also in LXCCreate.pm => move to pve-common
+sub next_free_nbd_dev {
+
+    for(my $i = 0;;$i++) {
+	my $dev = "/dev/nbd$i";
+	last if ! -b $dev;
+	next if -f "/sys/block/nbd$i/pid"; # busy
+	return $dev;
+    }
+    die "unable to find free nbd device\n";
+}
+
+sub commit_cloudinit_disk {
+    my ($file_path, $iso_path, $format) = @_;
+
+    my $nbd_dev = next_free_nbd_dev();
+    run_command(['qemu-nbd', '-c', $nbd_dev, $iso_path, '-f', $format]);
+
+    eval {
+	run_command(['genisoimage',
+		     '-R',
+		     '-V', 'config-2',
+		     '-o', $nbd_dev,
+		     $file_path]);
+    };
+    my $err = $@;
+    eval { run_command(['qemu-nbd', '-d', $nbd_dev]); };
+    warn $@ if $@;
+    die $err if $err;
+}
+
+sub find_cloudinit_storage {
+    my ($conf, $vmid) = @_;
+    foreach my $ds (keys %$conf) {
+	next if !valid_drivename($ds);
+	if ($conf->{$ds} =~ m@^(?:volume=)?([^:]+):\Q$vmid\E/vm-\Q$vmid\E-cloudinit\.qcow2@) {
+	    return $1;
+	}
+    }
+    return undef;
+}
+
 sub generate_cloudinitconfig {
     my ($conf, $vmid) = @_;
 
-    return if !$conf->{cloudinit};
+    my $storeid = find_cloudinit_storage($conf, $vmid);
+    return if !$storeid;
 
     my $path = "/tmp/cloudinit/$vmid";
 
@@ -6448,14 +6497,8 @@ sub generate_cloudinitconfig {
 	. generate_cloudinit_network($conf, $path);
     generate_cloudinit_metadata($conf, $path, $digest_data);
 
-    my $cmd = [];
-    push @$cmd, 'genisoimage';
-    push @$cmd, '-R';
-    push @$cmd, '-V', 'config-2';
-    push @$cmd, '-o', "$path/configdrive.iso";
-    push @$cmd, "$path/drive";
-
-    run_command($cmd);
+    my ($iso_path, $format) = prepare_cloudinit_disk($vmid, $storeid);
+    commit_cloudinit_disk("$path/drive", $iso_path, $format);
     rmtree("$path/drive");
 }
 
-- 
2.1.4




[pve-devel] [RFC PATCH v9 5/6] cloudinit : support any storeid for configdrive

2015-08-06 Thread Wolfgang Bumiller
From: Alexandre Derumier aderum...@odiso.com

- changelog:
  - support any storage and not only qcow2

  - cloudinit drive volume is no longer generated at start.

we can now enable|disable cloudinit with

qm set <vmid> -(ide|sata)<x> <storeid>:cloudinit
qm set <vmid> -delete (ide|sata)<x>

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Qemu.pm  | 26 +-
 PVE/QemuServer.pm | 82 ---
 2 files changed, 49 insertions(+), 59 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index fae2872..a19ad87 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -64,7 +64,9 @@ my $check_storage_access = sub {
 
 	my $volid = $drive->{file};
 
-   if (!$volid || $volid eq 'none') {
+   if (!$volid || ($volid eq 'none' || $volid eq 'cloudinit')) {
+   # nothing to check
+   } elsif ($volid =~ m/^(([^:\s]+):)?(cloudinit)$/) {
# nothing to check
 	} elsif ($isCDROM && ($volid eq 'cdrom')) {
 	    $rpcenv->check($authuser, "/", ['Sys.Console']);
@@ -131,6 +133,28 @@ my $create_disks = sub {
 	if (!$volid || $volid eq 'none' || $volid eq 'cdrom') {
 	    delete $disk->{size};
 	    $res->{$ds} = PVE::QemuServer::print_drive($vmid, $disk);
+	} elsif ($volid =~ m/^(?:([^:\s]+):)?cloudinit$/) {
+	    my $storeid = $1 || $default_storage;
+	    die "no storage ID specified (and no default storage)\n" if !$storeid;
+	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+	    my $name = "vm-$vmid-cloudinit";
+	    my $fmt = undef;
+	    if ($scfg->{path}) {
+		$name .= ".qcow2";
+		$fmt = 'qcow2';
+	    } else {
+		$fmt = 'raw';
+	    }
+	    # FIXME: Reasonable size? qcow2 shouldn't grow if the space isn't used anyway?
+	    my $cloudinit_iso_size = 5; # in MB
+	    my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid,
+						  $fmt, $name, $cloudinit_iso_size*1024);
+	    $disk->{file} = $volid;
+	    $disk->{media} = 'cdrom';
+	    push @$vollist, $volid;
+	    delete $disk->{format}; # no longer needed
+	    $res->{$ds} = PVE::QemuServer::print_drive($vmid, $disk);
+
 	} elsif ($volid =~ m/^(([^:\s]+):)?(\d+(\.\d+)?)$/) {
 	    my ($storeid, $size) = ($2 || $default_storage, $3);
 	    die "no storage ID specified (and no default storage)\n" if !$storeid;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2e9717c..7bdb655 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -746,8 +746,6 @@ sub get_iso_path {
return get_cdrom_path();
 } elsif ($cdrom eq 'none') {
return '';
-} elsif ($cdrom eq 'cloudinit') {
-   return /tmp/cloudinit/$vmid/configdrive.iso;
 } elsif ($cdrom =~ m|^/|) {
return $cdrom;
 } else {
@@ -759,7 +757,7 @@ sub get_iso_path {
 sub filename_to_volume_id {
 my ($vmid, $file, $media) = @_;
 
- if (!($file eq 'none' || $file eq 'cdrom' || $file eq 'cloudinit' ||
+ if (!($file eq 'none' || $file eq 'cdrom' ||
  $file =~ m|^/dev/.+| || $file =~ m/^([^:]+):(.+)$/)) {
 
return undef if $file =~ m|/|;
@@ -3208,8 +3206,6 @@ sub config_to_command {
 	push @$devices, '-device', print_drivedevice_full($storecfg, $conf, $vmid, $drive, $bridges);
     });
 
-    generate_cloudinit_command($conf, $vmid, $storecfg, $bridges, $devices);
-
     for (my $i = 0; $i < $MAX_NETS; $i++) {
 	next if !$conf->{"net$i"};
 	my $d = parse_net($conf->{"net$i"});
@@ -6420,23 +6416,6 @@ sub scsihw_infos {
     return ($maxdev, $controller, $controller_prefix);
 }
 
-# FIXME: Reasonable size? qcow2 shouldn't grow if the space isn't used anyway?
-my $cloudinit_iso_size = 5; # in MB
-
-sub prepare_cloudinit_disk {
-    my ($vmid, $storeid) = @_;
-
-    my $storecfg = PVE::Storage::config();
-    my $imagedir = PVE::Storage::get_image_dir($storecfg, $storeid, $vmid);
-    my $iso_name = "vm-$vmid-cloudinit.qcow2";
-    my $iso_path = "$imagedir/$iso_name";
-    if (!-e $iso_path) {
-	# vdisk_alloc size is in K
-	PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, 'qcow2', $iso_name, $cloudinit_iso_size*1024);
-    }
-    return ($iso_path, 'qcow2');
-}
-
 # FIXME: also in LXCCreate.pm => move to pve-common
 sub next_free_nbd_dev {
 
@@ -6468,52 +6447,39 @@ sub commit_cloudinit_disk {
     die $err if $err;
 }
 
-sub find_cloudinit_storage {
-    my ($conf, $vmid) = @_;
-    foreach my $ds (keys %$conf) {
-	next if !valid_drivename($ds);
-	if ($conf->{$ds} =~ m@^(?:volume=)?([^:]+):\Q$vmid\E/vm-\Q$vmid\E-cloudinit\.qcow2@) {
-	    return $1;
-	}
-    }
-    return undef;
-}
-
 sub generate_cloudinitconfig {
     my ($conf, $vmid) = @_;
 
-    my $storeid = find_cloudinit_storage($conf, $vmid);
-    return if !$storeid;
-
-    my $path = 

[pve-devel] [RFC PATCH v9 1/6] implement cloudinit v2

2015-08-06 Thread Wolfgang Bumiller
From: Alexandre Derumier aderum...@odiso.com

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm | 145 --
 control.in|   2 +-
 2 files changed, 143 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 84fc712..1bf480f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -18,17 +18,19 @@ use Cwd 'abs_path';
 use IPC::Open3;
 use JSON;
 use Fcntl;
+use UUID;
 use PVE::SafeSyslog;
 use Storable qw(dclone);
 use PVE::Exception qw(raise raise_param_exc);
 use PVE::Storage;
-use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach);
+use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach $IPV6RE $IPV4RE);
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file cfs_lock_file);
 use PVE::INotify;
 use PVE::ProcFSTools;
 use PVE::QMPClient;
 use PVE::RPCEnvironment;
+
 use Time::HiRes qw(gettimeofday);
 
 my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
@@ -384,6 +386,28 @@ EODESCR
 	maxLength => 256,
 	optional => 1,
     },
+    searchdomain => {
+	optional => 1,
+	type => 'string',
+	description => "Sets DNS search domains for a container. Create will automatically use the setting from the host if you neither set searchdomain or nameserver.",
+    },
+    nameserver => {
+	optional => 1,
+	type => 'string',
+	description => "Sets DNS server IP address for a container. Create will automatically use the setting from the host if you neither set searchdomain or nameserver.",
+    },
+    sshkey => {
+	optional => 1,
+	type => 'string',
+	description => "Ssh keys for root",
+    },
+    cloudinit => {
+	optional => 1,
+	type => 'boolean',
+	description => "Enable cloudinit config generation.",
+	default => 0,
+    },
+
 };
 
 # what about other qemu settings ?
@@ -712,6 +736,8 @@ sub get_iso_path {
return get_cdrom_path();
 } elsif ($cdrom eq 'none') {
return '';
+} elsif ($cdrom eq 'cloudinit') {
+   return /tmp/cloudinit/$vmid/configdrive.iso;
 } elsif ($cdrom =~ m|^/|) {
return $cdrom;
 } else {
@@ -723,7 +749,7 @@ sub get_iso_path {
 sub filename_to_volume_id {
 my ($vmid, $file, $media) = @_;
 
-if (!($file eq 'none' || $file eq 'cdrom' ||
+ if (!($file eq 'none' || $file eq 'cdrom' || $file eq 'cloudinit' ||
  $file =~ m|^/dev/.+| || $file =~ m/^([^:]+):(.+)$/)) {
 
return undef if $file =~ m|/|;
@@ -1356,6 +1382,11 @@ sub parse_net {
 	    $res->{firewall} = $1;
 	} elsif ($kvp =~ m/^link_down=([01])$/) {
 	    $res->{link_down} = $1;
+	} elsif ($kvp =~ m/^cidr=($IPV6RE|$IPV4RE)\/(\d+)$/) {
+	    $res->{address} = $1;
+	    $res->{netmask} = $2;
+	} elsif ($kvp =~ m/^gateway=($IPV6RE|$IPV4RE)$/) {
+	    $res->{gateway} = $1;
 	} else {
 	    return undef;
 	}
@@ -4212,12 +4243,14 @@ sub vm_start {
 	check_lock($conf) if !$skiplock;
 
 	die "VM $vmid already running\n" if check_running($vmid, undef, $migratedfrom);
-
+	
 	if (!$statefile && scalar(keys %{$conf->{pending}})) {
 	    vmconfig_apply_pending($vmid, $conf, $storecfg);
 	    $conf = load_config($vmid); # update/reload
 	}
 
+	generate_cloudinitconfig($conf, $vmid);
+
 	my $defaults = load_defaults();
 
 	# set environment variable useful inside network script
@@ -6316,4 +6349,110 @@ sub scsihw_infos {
 return ($maxdev, $controller, $controller_prefix);
 }
 
+sub generate_cloudinitconfig {
+    my ($conf, $vmid) = @_;
+
+    return if !$conf->{cloudinit};
+
+    my $path = "/tmp/cloudinit/$vmid";
+
+    mkdir "/tmp/cloudinit";
+    mkdir $path;
+    mkdir "$path/drive";
+    mkdir "$path/drive/openstack";
+    mkdir "$path/drive/openstack/latest";
+    mkdir "$path/drive/openstack/content";
+    generate_cloudinit_userdata($conf, $path);
+    generate_cloudinit_metadata($conf, $path);
+    generate_cloudinit_network($conf, $path);
+
+    my $cmd = [];
+    push @$cmd, 'genisoimage';
+    push @$cmd, '-R';
+    push @$cmd, '-V', 'config-2';
+    push @$cmd, '-o', "$path/configdrive.iso";
+    push @$cmd, "$path/drive";
+
+    run_command($cmd);
+    rmtree("$path/drive");
+    my $drive = PVE::QemuServer::parse_drive('ide3', 'cloudinit,media=cdrom');
+    $conf->{'ide3'} = PVE::QemuServer::print_drive($vmid, $drive);
+    update_config_nolock($vmid, $conf, 1);
+
+}
+
+sub generate_cloudinit_userdata {
+    my ($conf, $path) = @_;
+
+    my $content = "#cloud-config\n";
+    my $hostname = $conf->{searchdomain} ? $conf->{name}.".".$conf->{searchdomain} : $conf->{name};
+    $content .= "fqdn: $hostname\n";
+    $content .= "manage_etc_hosts: true\n";
+
+    if ($conf->{sshkey}) {
+	$content .= "users:\n";
+	$content .= "  - default\n";
+	$content .=   

[pve-devel] [RFC PATCH v9 2/6] cloud-init changes

2015-08-06 Thread Wolfgang Bumiller
 * Add ipconfigX for all netX configuration options and
   using ip=CIDR, gw=IP, ip6=CIDR, gw6=IP as option names
   like in LXC.
 * Adding explicit ip=dhcp and ip6=dhcp options.
 * Removing the config-update code and instead generating
   the ide3 commandline in config_to_command.
   - Adding a conflict check to write_vm_config similar to
   the one for 'cdrom'.
 * Replacing UUID generation with a SHA1 hash of the
   concatenated userdata and network configuration. For this
   generate_cloudinit_userdata/network now returns the
   content variable.
 * Finishing ipv6 support in generate_cloudinit_network.
   Note that ipv4 now only defaults to dhcp if no ipv6
   address was specified. (Explicitly requested  dhcp is
   always used.)
---
 PVE/QemuServer.pm | 160 --
 1 file changed, 130 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1bf480f..0dc5ac3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -18,7 +18,6 @@ use Cwd 'abs_path';
 use IPC::Open3;
 use JSON;
 use Fcntl;
-use UUID;
 use PVE::SafeSyslog;
 use Storable qw(dclone);
 use PVE::Exception qw(raise raise_param_exc);
@@ -490,8 +489,26 @@ EODESCR
 };
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+my $ipconfigdesc = {
+    optional => 1,
+    type => 'string', format => 'pve-qm-ipconfig',
+    typetext => "[ip=IPv4_CIDR[,gw=IPv4_GATEWAY]][,ip6=IPv6_CIDR[,gw6=IPv6_GATEWAY]]",
+    description => <<'EODESCR',
+Specify IP addresses and gateways for the corresponding interface.
+
+IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.
+
+The special string 'dhcp' can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided.
+For IPv6 the special string 'auto' can be used to use stateless autoconfiguration.
+
+If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using dhcp on IPv4.
+EODESCR
+};
+PVE::JSONSchema::register_standard_option("pve-qm-ipconfig", $ipconfigdesc);
 
 for (my $i = 0; $i < $MAX_NETS; $i++)  {
     $confdesc->{"net$i"} = $netdesc;
+    $confdesc->{"ipconfig$i"} = $ipconfigdesc;
 }
 
 my $drivename_hash;
@@ -1382,18 +1399,65 @@ sub parse_net {
 	    $res->{firewall} = $1;
 	} elsif ($kvp =~ m/^link_down=([01])$/) {
 	    $res->{link_down} = $1;
-	} elsif ($kvp =~ m/^cidr=($IPV6RE|$IPV4RE)\/(\d+)$/) {
+	} else {
+	    return undef;
+	}
+
+    }
+
+    return undef if !$res->{model};
+
+    return $res;
+}
+
+# ipconfigX ip=cidr,gw=ip,ip6=cidr,gw6=ip
+sub parse_ipconfig {
+    my ($data) = @_;
+
+    my $res = {};
+
+    foreach my $kvp (split(/,/, $data)) {
+	if ($kvp =~ m/^ip=dhcp$/) {
+	    $res->{address} = 'dhcp';
+	} elsif ($kvp =~ m/^ip=($IPV4RE)\/(\d+)$/) {
 	    $res->{address} = $1;
 	    $res->{netmask} = $2;
-	} elsif ($kvp =~ m/^gateway=($IPV6RE|$IPV4RE)$/) {
+	} elsif ($kvp =~ m/^gw=($IPV4RE)$/) {
 	    $res->{gateway} = $1;
+	} elsif ($kvp =~ m/^ip6=dhcp6?$/) {
+	    $res->{address6} = 'dhcp';
+	} elsif ($kvp =~ m/^ip6=auto$/) {
+	    $res->{address6} = 'auto';
+	} elsif ($kvp =~ m/^ip6=($IPV6RE)\/(\d+)$/) {
+	    $res->{address6} = $1;
+	    $res->{netmask6} = $2;
+	} elsif ($kvp =~ m/^gw6=($IPV6RE)$/) {
+	    $res->{gateway6} = $1;
 	} else {
 	    return undef;
 	}
+    }
 
+    if ($res->{gateway} && !$res->{address}) {
+	warn 'gateway specified without specifying an IP address';
+	return undef;
+    }
+    if ($res->{gateway6} && !$res->{address6}) {
+	warn 'IPv6 gateway specified without specifying an IPv6 address';
+	return undef;
+    }
+    if ($res->{gateway} && $res->{address} eq 'dhcp') {
+	warn 'gateway specified together with DHCP';
+	return undef;
+    }
+    if ($res->{gateway6} && $res->{address6} eq 'dhcp') {
+	warn 'IPv6 gateway specified together with DHCP6';
+	return undef;
    }
 
-    return undef if !$res->{model};
+    if (!$res->{address} && !$res->{address6}) {
+	return { address => 'dhcp' };
+    }
 
     return $res;
 }
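The key/value parser above can be sketched in Python to make the validation rules concrete; this is a hedged, IPv4-only rendering (the real code also handles the `$IPV6RE` branches), and the simplified `IPV4` pattern is a stand-in for the real regex:

```python
import re

IPV4 = r"\d{1,3}(?:\.\d{1,3}){3}"  # simplified stand-in for $IPV4RE

def parse_ipconfig(data):
    """Parse 'ip=CIDR,gw=IP' options; None signals a parse error,
    mirroring the Perl function returning undef."""
    res = {}
    for kvp in data.split(","):
        if kvp == "ip=dhcp":
            res["address"] = "dhcp"
        elif m := re.fullmatch(rf"ip=({IPV4})/(\d+)", kvp):
            res["address"], res["netmask"] = m.group(1), m.group(2)
        elif m := re.fullmatch(rf"gw=({IPV4})", kvp):
            res["gateway"] = m.group(1)
        else:
            return None
    # a gateway needs a static address of the same type
    if res.get("gateway") and not res.get("address"):
        return None
    if res.get("gateway") and res.get("address") == "dhcp":
        return None
    # no address at all defaults to DHCP
    if not res.get("address"):
        return {"address": "dhcp"}
    return res
```

The same structure extends to the ip6/gw6 branches with an IPv6 pattern and the extra `auto` keyword.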
@@ -1614,6 +1678,17 @@ sub verify_net {
     die "unable to parse network options\n";
 }
 
+PVE::JSONSchema::register_format('pve-qm-ipconfig', \&verify_ipconfig);
+sub verify_ipconfig {
+    my ($value, $noerr) = @_;
+
+    return $value if parse_ipconfig($value);
+
+    return undef if $noerr;
+
+    die "unable to parse ipconfig options\n";
+}
+
 PVE::JSONSchema::register_format('pve-qm-drive', \&verify_drive);
 sub verify_drive {
     my ($value, $noerr) = @_;
@@ -1995,6 +2070,11 @@ sub write_vm_config {
 	delete $conf->{cdrom};
     }
 
+    if ($conf->{cloudinit}) {
+	die "option cloudinit conflicts with ide3\n" if $conf->{ide3};
+	delete $conf->{cloudinit};
+    }
+
     # we do not use 'smp' any longer
     if ($conf->{sockets}) {
 	delete $conf->{smp};
@@ -3140,6 

[pve-devel] [RFC PATCH v9 6/6] delete cloudinit images as if they weren't cdroms

2015-08-06 Thread Wolfgang Bumiller
---
 PVE/API2/Qemu.pm  |  2 +-
 PVE/QemuServer.pm | 14 +-
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a19ad87..4dcbb26 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -41,7 +41,7 @@ my $resolve_cdrom_alias = sub {
 my $test_deallocate_drive = sub {
     my ($storecfg, $vmid, $key, $drive, $force) = @_;
 
-    if (!PVE::QemuServer::drive_is_cdrom($drive)) {
+    if (!PVE::QemuServer::drive_is_cdrom($drive, 1)) {
 	my $volid = $drive->{file};
 	if ( PVE::QemuServer::vm_is_volid_owner($storecfg, $vmid, $volid)) {
 	    if ($force || $key =~ m/^unused/) {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7bdb655..1cb6b3c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -36,6 +36,8 @@ my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
 
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
+my $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
+
+
 # Note about locking: we use flock on the config file protect
 # against concurent actions.
 # Aditionaly, we have a 'lock' setting in the config file. This
@@ -955,7 +957,7 @@ sub parse_drive {
     return undef if $res->{secs} && $res->{secs} !~ m/^\d+$/;
     return undef if $res->{media} && $res->{media} !~ m/^(disk|cdrom)$/;
     return undef if $res->{trans} && $res->{trans} !~ m/^(none|lba|auto)$/;
-    return undef if $res->{format} && $res->{format} !~ m/^(raw|cow|qcow|qed|qcow2|vmdk|cloop)$/;
+    return undef if $res->{format} && $res->{format} !~ m/^($QEMU_FORMAT_RE)$/;
     return undef if $res->{rerror} && $res->{rerror} !~ m/^(ignore|report|stop)$/;
     return undef if $res->{werror} && $res->{werror} !~ m/^(enospc|ignore|report|stop)$/;
     return undef if $res->{backup} && $res->{backup} !~ m/^(yes|no)$/;
@@ -1295,7 +1297,9 @@ sub print_netdev_full {
 }
 
 sub drive_is_cdrom {
-    my ($drive) = @_;
+    my ($drive, $exclude_cloudinit) = @_;
+
+    return 0 if $exclude_cloudinit && $drive->{file} =~ m@[:/]vm-\d+-cloudinit(?:\.$QEMU_FORMAT_RE)?$@;
 
     return $drive && $drive->{media} && ($drive->{media} eq 'cdrom');
 
@@ -1544,7 +1548,7 @@ sub vmconfig_undelete_pending_option {
 sub vmconfig_register_unused_drive {
     my ($storecfg, $vmid, $conf, $drive) = @_;
 
-    if (!drive_is_cdrom($drive)) {
+    if (!drive_is_cdrom($drive, 1)) {
 	my $volid = $drive->{file};
 	if (vm_is_volid_owner($storecfg, $vmid, $volid)) {
 	    add_unused_volume($conf, $volid, $vmid);
@@ -1910,7 +1914,7 @@ sub destroy_vm {
     foreach_drive($conf, sub {
 	my ($ds, $drive) = @_;
 
-	return if drive_is_cdrom($drive);
+	return if drive_is_cdrom($drive, 1);
 
 	my $volid = $drive->{file};
 
@@ -6204,7 +6208,7 @@ sub qemu_img_convert {
 sub qemu_img_format {
     my ($scfg, $volname) = @_;
 
-    if ($scfg->{path} && $volname =~ m/\.(raw|cow|qcow|qcow2|qed|vmdk|cloop)$/) {
+    if ($scfg->{path} && $volname =~ m/\.($QEMU_FORMAT_RE)$/) {
 	return $1;
     } else {
 	return "raw";
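The cloudinit-name match introduced in drive_is_cdrom can be sketched in Python; the leading `[:/]` is what lets both file storage (`.../vm-112-cloudinit.qcow2`) and block storage (e.g. LVM `vg:vm-112-cloudinit`, the case this revision fixes) match, with the format suffix optional. This is a hypothetical rendering of the helper for illustration:

```python
import re

QEMU_FORMAT_RE = r"raw|cow|qcow|qcow2|qed|vmdk|cloop"

# a cloudinit volume is recognised purely by its name; the format
# extension only exists on path-based storages
CLOUDINIT_RE = re.compile(rf"[:/]vm-\d+-cloudinit(?:\.(?:{QEMU_FORMAT_RE}))?$")

def drive_is_cdrom(drive, exclude_cloudinit=False):
    """Cloudinit drives look like cdroms, but callers that deallocate
    volumes pass exclude_cloudinit so the backing image is treated
    like a regular disk and actually gets deleted."""
    if exclude_cloudinit and CLOUDINIT_RE.search(drive.get("file", "")):
        return False
    return drive.get("media") == "cdrom"
```

This mirrors why patch 6/6 threads the extra flag through the deallocation paths only.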
-- 
2.1.4




[pve-devel] [RFC PATCH v9 0/6] cloudinit updates

2015-08-06 Thread Wolfgang Bumiller
Sending the entire series again since I made some changes in between.

(@Alexandre, since I modified most patches in the series to bring
them more up to date (eg. see the changelog for patch 1): if you have
already worked on conflicting changes and don't feel like dealing
with the merge conflict, just send them over and I'll go through it)

* patch 1
 - removed $ip_reverse_mask as it's now available through PVE::Network
* patch 2
 - updated use of $ip_reverse_mask here, too
 - included SLAAC like with containers by setting ip6=auto
 - some style fixes
 - fixed bad conditions for warnings
* patch 4:
 - still has a FIXME pending:
   (the next_free_nbd_dev patch for pve-common is already on the list)
* patch 5
 - style fix: `if( $scfg...` = `if ($scfg...`
* patch 6
 - fixed drive_is_cdrom check to support block storage (tested with LVM)

Alexandre Derumier (3):
  implement cloudinit v2
  cloud-init : force ifdown ifup at boot
  cloudinit : support any storeid for configdrive

Wolfgang Bumiller (3):
  cloud-init changes
  cloudinit: use qcow2 for future snapshot support
  delete cloudinit images as if they weren't cdroms

 PVE/API2/Qemu.pm  |  28 +-
 PVE/QemuServer.pm | 271 --
 control.in|   2 +-
 3 files changed, 290 insertions(+), 11 deletions(-)

-- 
2.1.4




Re: [pve-devel] coreboot support

2015-08-06 Thread Dietmar Maurer
 I have successfully made a boot rom with coreboot using seabios as
 payload ;-) 

So what is the advantage if that still uses seabios?



[pve-devel] [PATCH_V2] Fix the zfs parse volume

2015-08-06 Thread Wolfgang Link
There was a change in the internal naming, so it is necessary to adapt the
parser regex.
---
 PVE/Storage/ZFSPoolPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index d7204b2..dbaa5ca 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -131,7 +131,7 @@ sub zfs_parse_zvol_list {
 sub parse_volname {
     my ($class, $volname) = @_;
 
-    if ($volname =~ m/^(((base|vm)-(\d+)-\S+)\/)?((base)?(vm|subvol)?-(\d+)-\S+)$/) {
+    if ($volname =~ m/(((base|vm)-(\d+)-\S+)\/)?((base)?(vm|subvol)?-(\d+)-\S+)$/) {
 	return ('images', $5, $8, $2, $4, $6);
     }
 
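The one-character fix above drops the leading `^` so the regex anchors at the end of the volume name instead of the start. A hypothetical Python rendering shows which capture groups the Perl code returns and how the optional base-image prefix is handled:

```python
import re

# same pattern as the patched Perl, end-anchored only
VOLNAME_RE = re.compile(
    r"(((base|vm)-(\d+)-\S+)/)?((base)?(vm|subvol)?-(\d+)-\S+)$")

def parse_volname(volname):
    """Return the tuple the Perl code builds:
    ('images', $5 volume name, $8 vmid, $2 base volume, $4 base vmid, $6)."""
    m = VOLNAME_RE.search(volname)
    if not m:
        raise ValueError(f"unable to parse zfs volume name '{volname}'")
    return ("images", m.group(5), m.group(8),
            m.group(2), m.group(4), m.group(6))
```

With `search` instead of a `^`-anchored match, a changed internal prefix before the `vm-`/`subvol-` part no longer breaks parsing.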
 
-- 
2.1.4




Re: [pve-devel] use our own format for LXC containers

2015-08-06 Thread Alexandre DERUMIER
Do you still write the config file in lxc format for lxc-start?
(maybe on each CT start?)

If yes, I'm not sure, but I think it's possible to pass key=value pairs on the
lxc-start command line:


   -s, --define KEY=VAL
  Assign value VAL to configuration variable KEY. This overrides 
any assignment done in config_file.


- Mail original -
De: dietmar diet...@proxmox.com
À: pve-devel pve-devel@pve.proxmox.com
Envoyé: Jeudi 6 Août 2015 12:20:13
Objet: [pve-devel] use our own format for LXC containers

Hi all, 

we finally use our own format for lxc containers. The patch 
touch most parts of the existing code, so it was quite large (sorry). 
To test, you need to update the following packages from git: 

pve-cluster 
lxc 
pve-container 
pve-manager 

We now store LXC configuration as normal files at: 

/etc/pve/lxc/vmid.conf 

Here is an example: 

--- 
arch: amd64 
cpulimit: 1 
cpuunits: 1024 
hostname: CT105 
memory: 512 
net0: name=eth0,hwaddr=0E:18:24:41:2C:43,bridge=vmbr0 
ostype: debian 
rootfs: subpool:subvol-105-rootfs,size=2 
swap: 512 
 

Please test and report bugs ;-) 

Note: The new format is not compatible with the old one, so you will 
lose existing containers when you update. 

- Dietmar 



Re: [pve-devel] [PATCH lxc] consistent interface names and live network update

2015-08-06 Thread Wolfgang Bumiller
That was supposed to be [PATCH pve-container]

On Thu, Aug 06, 2015 at 03:46:47PM +0200, Wolfgang Bumiller wrote:
 veth${vmid}.${id} looks like a vlan device, with qemu we use
 tap${vmid}i${id}, so it makes sense to use an 'i' for
 containers, too.
 
 Fixed update_net to work with the new configuration method,
 it still expected the old configuration hash and errored
 when trying to change the network interface configuration
  of a running container.
 ---
  src/PVE/LXC.pm  | 74 
 +
  src/lxcnetaddbr |  2 +-
  2 files changed, 39 insertions(+), 37 deletions(-)
 
 diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
 index ad751ff..ff24696 100644
 --- a/src/PVE/LXC.pm
 +++ b/src/PVE/LXC.pm
 @@ -873,7 +873,7 @@ sub update_lxc_config {
 	my $d = PVE::LXC::parse_lxc_network($conf->{$k});
 	$netcount++;
 	$raw .= "lxc.network.type = veth\n";
 -	$raw .= "lxc.network.veth.pair = veth$vmid.$ind\n";
 +	$raw .= "lxc.network.veth.pair = veth${vmid}i${ind}\n";
 	$raw .= "lxc.network.hwaddr = $d->{hwaddr}\n" if defined($d->{hwaddr});
 	$raw .= "lxc.network.name = $d->{name}\n" if defined($d->{name});
 	$raw .= "lxc.network.mtu = $d->{mtu}\n" if defined($d->{mtu});
 @@ -940,7 +940,7 @@ sub update_pct_config {
 	    delete $conf->{$opt};
 	    next if !$running;
 	    my $netid = $1;
 -	    PVE::Network::veth_delete("veth${vmid}.$netid");
 +	    PVE::Network::veth_delete("veth${vmid}i$netid");
 	} else {
 	    die "implement me";
 	}
 @@ -1082,49 +1082,57 @@ my $safe_string_ne = sub {
  sub update_net {
      my ($vmid, $conf, $opt, $newnet, $netid, $rootdir) = @_;
  
 -    my $veth = $newnet->{'veth.pair'};
 -    my $vethpeer = $veth . "p";
 +    if ($newnet->{type} ne 'veth') {
 +	# for when there are physical interfaces
 +	die "cannot update interface of type $newnet->{type}";
 +    }
 +
 +    my $veth = "veth${vmid}i${netid}";
      my $eth = $newnet->{name};
  
 -    if ($conf->{$opt}) {
 -	if ($safe_string_ne($conf->{$opt}->{hwaddr}, $newnet->{hwaddr}) ||
 -	    $safe_string_ne($conf->{$opt}->{name}, $newnet->{name})) {
 +    if (my $oldnetcfg = $conf->{$opt}) {
 +	my $oldnet = parse_lxc_network($oldnetcfg);
 +
 +	if ($safe_string_ne($oldnet->{hwaddr}, $newnet->{hwaddr}) ||
 +	    $safe_string_ne($oldnet->{name}, $newnet->{name})) {
  
 -	    PVE::Network::veth_delete($veth);
 +	    PVE::Network::veth_delete($veth);
 	    delete $conf->{$opt};
 	    PVE::LXC::write_config($vmid, $conf);
  
 -	    hotplug_net($vmid, $conf, $opt, $newnet);
 +	    hotplug_net($vmid, $conf, $opt, $newnet, $netid);
  
 -	} elsif ($safe_string_ne($conf->{$opt}->{bridge}, $newnet->{bridge}) ||
 -		 $safe_num_ne($conf->{$opt}->{tag}, $newnet->{tag}) ||
 -		 $safe_num_ne($conf->{$opt}->{firewall}, $newnet->{firewall})) {
 +	} elsif ($safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
 +		 $safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
 +		 $safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
  
 -	    if ($conf->{$opt}->{bridge}){
 +	    if ($oldnet->{bridge}) {
 		PVE::Network::tap_unplug($veth);
 -		delete $conf->{$opt}->{bridge};
 -		delete $conf->{$opt}->{tag};
 -		delete $conf->{$opt}->{firewall};
 +		foreach (qw(bridge tag firewall)) {
 +		    delete $oldnet->{$_};
 +		}
 +		$conf->{$opt} = print_lxc_network($oldnet);
 		PVE::LXC::write_config($vmid, $conf);
 	    }
  
 -	    PVE::Network::tap_plug($veth, $newnet->{bridge}, $newnet->{tag}, $newnet->{firewall});
 -	    $conf->{$opt}->{bridge} = $newnet->{bridge} if $newnet->{bridge};
 -	    $conf->{$opt}->{tag} = $newnet->{tag} if $newnet->{tag};
 -	    $conf->{$opt}->{firewall} = $newnet->{firewall} if $newnet->{firewall};
 +	    PVE::Network::tap_plug($veth, $newnet->{bridge}, $newnet->{tag}, $newnet->{firewall});
 +	    foreach (qw(bridge tag firewall)) {
 +		$oldnet->{$_} = $newnet->{$_} if $newnet->{$_};
 +	    }
 +	    $conf->{$opt} = print_lxc_network($oldnet);
 	    PVE::LXC::write_config($vmid, $conf);
 	}
     } else {
 -	hotplug_net($vmid, $conf, $opt, $newnet);
 +	hotplug_net($vmid, $conf, $opt, $newnet, $netid);
     }
  
     update_ipconfig($vmid, $conf, $opt, $eth, $newnet, $rootdir);
 }
  
  sub hotplug_net {
 -    my ($vmid, $conf, $opt, $newnet) = @_;
 +    my ($vmid, $conf, $opt, $newnet, $netid) = @_;
  
 -    my $veth = $newnet->{'veth.pair'};
 +    my $veth = "veth${vmid}i${netid}";
     my $vethpeer = $veth . "p";
     my $eth = $newnet->{name};
  
 @@ -1139,18 +1147,11 @@ sub hotplug_net {
     $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', '/sbin/ip', 'link', 'set', $eth, 'up'];
  

Re: [pve-devel] [PATCH lxc] enable seccomp

2015-08-06 Thread Dietmar Maurer

applied, thanks!





Re: [pve-devel] use our own format for LXC containers

2015-08-06 Thread Dietmar Maurer
 Do you still write the config file at lxc format for lxc-start ?
 (maybe on each ct start ?)

Sure, yes. What problem do you want to solve?



Re: [pve-devel] use our own format for LXC containers

2015-08-06 Thread Alexandre DERUMIER
> Sure, yes. What problem do you want to solve?
No special problem, just removing a file write at each start.



BTW, I have read the new code; I haven't tested it yet, but:

if ($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs') {
+	if ($size > 0) {
+	    $volid = PVE::Storage::vdisk_alloc($storage_conf, $storage, $vmid, 'raw',
+					       "vm-$vmid-rootfs.raw", $size);
+	} else {
+	    $volid = PVE::Storage::vdisk_alloc($storage_conf, $storage, $vmid, 'subvol',
+					       "subvol-$vmid-rootfs", 0);
+	}



I think this is wrong for $size == 0? (subvol is related to zfs, right?)
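For reference, the branch being questioned can be paraphrased as follows. This is a sketch only: the real code calls PVE::Storage::vdisk_alloc, and whether a 'subvol' allocation makes sense on 'dir'/'nfs' storages is exactly the open question.

```python
# Paraphrase of the quoted allocation logic: size > 0 selects a raw image,
# size == 0 falls through to a 'subvol' allocation. Names follow the quoted
# code; this is an illustration, not the pve-container implementation.
def pick_rootfs_alloc(size, vmid):
    if size > 0:
        return ("raw", f"vm-{vmid}-rootfs.raw", size)
    return ("subvol", f"subvol-{vmid}-rootfs", 0)

print(pick_rootfs_alloc(4, 101))  # -> ('raw', 'vm-101-rootfs.raw', 4)
print(pick_rootfs_alloc(0, 101))  # -> ('subvol', 'subvol-101-rootfs', 0)
```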

- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, 6 August 2015 15:18:52
Subject: Re: [pve-devel] use our own format for LXC containers

> Do you still write the config file at lxc format for lxc-start ?
> (maybe on each ct start ?)

Sure, yes. What problem do you want to solve?


[pve-devel] [PATCH lxc] enable seccomp

2015-08-06 Thread Wolfgang Bumiller
explicitly enable seccomp and add libseccomp-dev build-time
and libseccomp2 runtime dependencies
---
 debian/control | 4 ++--
 debian/rules   | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/debian/control b/debian/control
index 585efd4..c3f335e 100644
--- a/debian/control
+++ b/debian/control
@@ -2,13 +2,13 @@ Source: lxc
 Section: admin
 Priority: optional
 Maintainer: Proxmox Support Team supp...@proxmox.com
-Build-Depends: debhelper (>= 9), autotools-dev, libapparmor-dev, docbook2x, libcap-dev, dh-apparmor, libcgmanager-dev, graphviz, libgnutls28-dev, linux-libc-dev, dh-autoreconf
+Build-Depends: debhelper (>= 9), autotools-dev, libapparmor-dev, docbook2x, libcap-dev, dh-apparmor, libcgmanager-dev, graphviz, libgnutls28-dev, linux-libc-dev, dh-autoreconf, libseccomp-dev
 Standards-Version: 3.9.5
 Homepage: https://linuxcontainers.org
 
 Package: lxc-pve
 Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, libcap2, apparmor, python3, bridge-utils, uidmap, libgnutlsxx28, criu (>= 1.5.2-1), lxcfs
+Depends: ${shlibs:Depends}, ${misc:Depends}, libcap2, apparmor, python3, bridge-utils, uidmap, libgnutlsxx28, criu (>= 1.5.2-1), lxcfs, libseccomp2
 Conflicts: lxc
 Replaces: lxc
 Provides: lxc
diff --git a/debian/rules b/debian/rules
index 3027c5c..893ed6d 100755
--- a/debian/rules
+++ b/debian/rules
@@ -23,7 +23,8 @@ override_dh_auto_configure:
--enable-cgmanager \
--disable-python \
--disable-lua \
-   --disable-examples
+   --disable-examples \
+   --enable-seccomp
 
 override_dh_strip:
dh_strip --dbg-package=lxc-pve-dbg
-- 
2.1.4




[pve-devel] lxc container migration not implemented

2015-08-06 Thread moula BADJI

I tested the new platform with the latest updates.
On a glusterfs server there is no possibility to migrate containers:
TASK ERROR: lxc container migration not implemented.

Coming soon?

Thanks.


Re: [pve-devel] use our own format for LXC containers

2015-08-06 Thread Dietmar Maurer
> In future, I want to implement 'subvol' for normal directories (maps to
> simple subdirectories).
>
> So that we can remove those get_private_dir hacks...

In the future, we can also add additional mount entries like:

rootfs: local:101/subvol-101-disk-1.raw,size=4
mount0: faststore:101/subvol-101-disk-1.raw,size=1,mp=/mnt/faststore
mount1: slowbigstore:101/subvol-101-disk-1.raw,size=1024,mp=/mnt/slowbigstore

I assume that is what you requested recently?
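The proposed entries follow the usual `<storage>:<volname>` volume-id syntax with comma-separated options. A rough parser sketch, assuming the `size`/`mp` keys from the example above and a plain key=value option format (the parsing scheme itself is an assumption, not the eventual pve-container code):

```python
# Rough sketch of parsing a proposed mount entry such as
#   "faststore:101/subvol-101-disk-1.raw,size=1,mp=/mnt/faststore"
# The volid format (storage:volname) matches PVE conventions; treating the
# remaining comma-separated fields as key=value options is an assumption.
def parse_mount_entry(value):
    volid, *opts = value.split(",")
    storage, volname = volid.split(":", 1)
    options = dict(opt.split("=", 1) for opt in opts)
    return {"storage": storage, "volname": volname, **options}

entry = parse_mount_entry("faststore:101/subvol-101-disk-1.raw,size=1,mp=/mnt/faststore")
print(entry["storage"], entry["mp"])  # -> faststore /mnt/faststore
```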



Re: [pve-devel] [PATCH_V2] Fix the zfs parse volume

2015-08-06 Thread Dietmar Maurer
Again, can you please add an example to show why this is required?

> There was a change in the internal name, so it is necessary to adapt the
> parser regex.
