3) do we really need to define a special storage for hosting the qcow2? Maybe
always store it in local storage and rsync it on live migration.
(Not everybody has NFS shared storage.)
Or do you plan to allow any storage format instead of qcow2?
(We need the snapshot feature on this cloud-init drive, if the user
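As a rough illustration of the rsync-on-migration idea above, the copy step could look like the following sketch. The paths, VMID and target hostname are all invented for illustration; a real implementation would take them from the storage and migration configuration:

```shell
#!/bin/sh
# Sketch: build the rsync invocation that would copy a locally stored
# cloud-init image to the migration target node.
# All names here are hypothetical examples.
build_cloudinit_sync_cmd() {
    vmid="$1"    # VM id, e.g. 100
    target="$2"  # target node, e.g. node2
    img="/var/lib/vz/images/${vmid}/vm-${vmid}-cloudinit.qcow2"
    # --sparse keeps the mostly-empty qcow2 small on the target
    echo "rsync -a --sparse ${img} root@${target}:${img}"
}

build_cloudinit_sync_cmd 100 node2
```

Since the image is tiny (see the compression numbers later in the thread), the copy itself would be cheap; the open question is wiring it into the migration phases.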
Hi all,
I just updated dab to work with LXC:
https://git.proxmox.com/?p=dab.git;a=summary
https://git.proxmox.com/?p=dab-pve-appliances.git;a=summary
Also added support for Ubuntu 12.04, 14.04 and 15.04 (precise, trusty and
vivid).
- Dietmar
___
However I found we could easily turn it into a CDROM drive again with
this little change:
- my $drive = parse_drive($ds, "$storeid:$vmid/vm-$vmid-cloudinit.qcow2");
+ my $drive = parse_drive($ds,
"$storeid:$vmid/vm-$vmid-cloudinit.qcow2,media=cdrom");
This is the only change required to make it
also,
I can compress the cloud-init qcow2 down to around 5K with compression and a
small cluster_size:
qemu-img convert -c vm-100-cloudinit.qcow2 -f qcow2 -O qcow2 test.qcow2 -o
cluster_size=512b
-rw-r--r-- 1 root root 5120 Jun 30 09:43 test.qcow2
- Original message -
From: aderumier
Yes, I agree. Some parts of the code have to be left out.
Other comments inline:
On 06/29/2015 07:21 PM, Dietmar Maurer wrote:
comments inline
var rows = {};
+var submit_twice = function(enable) {
+
+    var me = this;
+
+    var form = me.formPanel.getForm();
> I can compress the cloud-init qcow2 to around 5K with compression and a
> small cluster_size
great :-)
1) does it work with Windows? (as we expose the config drive as a disk and
not a cdrom)
Well, it shows up, and according to the code it should, but my
experience with administrating Windows systems is limited, and my
patience has hit its limit with all that lengthy point-and-click work...
You don't
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 10 +++---
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 15fb471..f035b67 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6176,17 +6176,13 @@
Currently when drive-mirror is starting,
the VM and QMP hang during the bitmap scanning phase (mainly with the raw,
NFS and block raw drivers).
This patch adds a regular pause between each iteration.
The initial patch from the qemu mailing list works, but the pause time is
really too short,
so we still hang QMP.
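For context, the mirror job in question is started over QMP roughly as sketched below; it is during the initial scan of the source volume that the hang described above occurs. The device name and target path are invented for illustration:

```shell
#!/bin/sh
# Sketch: build the QMP command that starts a drive-mirror job.
# "drive-virtio0" and the target path are hypothetical examples;
# a real setup would send this JSON over the VM's QMP socket.
qmp_drive_mirror() {
    device="$1"
    target="$2"
    printf '{"execute":"drive-mirror","arguments":{"device":"%s","target":"%s","sync":"full","format":"qcow2"}}\n' \
        "$device" "$target"
}

qmp_drive_mirror drive-virtio0 /tmp/target.qcow2
```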
Hi,
these patches finally fix the bug with drive-mirror, which can hang qemu
when it scans the source volume to create the bitmap.
It occurs mainly with raw files (depending on the filesystem), NFS and some
block storage like Ceph.
Currently when this occurs, qemu and QMP hang (can take
and I don't know if the lxc root pid is always the first in the tasks list?
I don't think you can assume that.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
There were some issues in 0.6.3 but they are fixed in 0.6.4!
https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4
On 06/30/2015 02:08 PM, Angel Docampo wrote:
Hi there!
Is it based on zfs send-receive? I thought it was buggy on linux...
perhaps it was on 0.6.3?
Anyway, that's a great
Deduplicated network setup code.
---
src/PVE/LXC.pm | 117 +
1 file changed, 35 insertions(+), 82 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 36c3995..bd2ab08 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1246,98
Implemented the 'implement me' code and changed the v4* and v6*
variable names to be consistent with the way they're named in
qemu-server: e.g. address and address6 vs. v4address and
v6address.
---
src/PVE/LXCSetup/Debian.pm | 35 +--
1 file changed, 21 insertions(+), 14
According to their documentation both of these variables
take an IP[/prefix] notation, and the gateway even has an
optional '%iface' suffix. So it should be possible to simply
copy the value from the configuration directly.
---
src/PVE/LXCSetup/Redhat.pm | 8
1 file changed, 4
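If that reading of the documentation holds, the generated ifcfg fragment could carry the configured values verbatim, e.g. (all addresses and the interface name are invented for illustration):

```
IPADDR=192.168.1.10/24
GATEWAY=192.168.1.1
IPV6ADDR=2001:db8::10/64
IPV6_DEFAULTGW=2001:db8::1%eth0
```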
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
src/PVE/LXC.pm | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 36c3995..7b7226b 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1005,9 +1005,11 @@ sub
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
src/PVE/LXC.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7b7226b..9214dd2 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1047,6 +1047,9 @@ sub update_lxc_config {
my $value =
to find the lxc pid,
it's possible to find it here:
/sys/fs/cgroup/systemd/lxc/$name/tasks
but that tasks file holds the pids of all processes running in the container,
and I don't know if the lxc root pid is always the first in the tasks list?
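The naive approach would be to take the first entry of that tasks file, as sketched below; but as the question above points out, nothing guarantees this is the container's init pid, so a robust implementation would want to cross-check via /proc (e.g. the parent pid). The demo uses a fake tasks file since the real cgroup path only exists on an LXC host:

```shell
#!/bin/sh
# Sketch: read the first pid from a cgroup tasks file.
# NOT guaranteed to be the container's init pid; illustration only.
first_task_pid() {
    head -n 1 "$1"
}

# Demonstrate with a fake tasks file (the real path would be
# /sys/fs/cgroup/systemd/lxc/$name/tasks):
tasks=$(mktemp)
printf '4321\n4322\n4330\n' > "$tasks"
first_task_pid "$tasks"
rm -f "$tasks"
```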
- Original message -
From: dietmar
Hi all,
We just released the brand new Proxmox VE ZFS replication manager
(pve-zsync)!
This CLI tool synchronizes your virtual machine (virtual disks and VM
configuration) or directory stored on ZFS between two servers - very
useful for backup and replication tasks.
A big Thank-you to our
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 181 +-
control.in| 2 +-
2 files changed, 179 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ab9ac74..7956c50
This patch adds support for creating the cloud-init drive on any storage.
I introduce a
cloudinitdrive0: local:100/vm-100-cloudinit.qcow2
entry to store the generated image reference.
This is to avoid scanning the storage every time at VM start
to see if the drive has already been generated.
(and also if we
From: Wolfgang Bumiller w.bumil...@proxmox.com
* Add ipconfigX for all netX configuration options,
using ip=CIDR, gw=IP, ip6=CIDR, gw6=IP as option names
like in LXC.
* Adding explicit ip=dhcp and ip6=dhcp options.
* Removing the config-update code and instead generating
the ide3
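For illustration, under that scheme a VM config could pair each netX entry with an ipconfigX entry like the following (the MAC address, bridge and IP values are invented):

```
net0: virtio=52:54:00:12:34:56,bridge=vmbr0
ipconfig0: ip=192.168.1.10/24,gw=192.168.1.1,ip6=dhcp
```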
This patch adds support for generic storage to store the cloud-init image,
plus a dedicated SATA controller.
Details are in patch 5/5.
when we change the IP address, the network configuration is correctly written
in the guest, but cloud-init doesn't apply it and keeps the previous IP
address.
Workaround: force an ifdown/ifup.
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 3 +++
1 file changed, 3 insertions(+)
From: Wolfgang Bumiller w.bumil...@proxmox.com
The config-disk is now generated into a qcow2 located on a
configured storage.
It is now also storage-managed, so live-migration and
live-snapshotting should work as they do for regular hard
drives.
Signed-off-by: Alexandre Derumier
Changes since [PATCH]:
- removed all the unnecessary code
- direct call of the warning message without using the hook method
- added a function submit_first to pass the enable flag parameter in the
correct way
- added an additional condition for the close method
- optimized enable
applied (to master), thanks!
Note: I fixed the path for qemu-kvm/debian/patches/mirror-sleep2.patch
to debian/patches/mirror-sleep2.patch
On 07/01/2015 06:01 AM, Alexandre Derumier wrote:
Currently when drive-mirror is starting,
the VM and QMP hang during the bitmap scanning phase (mainly with
Hi Dietmar.
On 06/30/2015 04:41 PM, Dietmar Maurer wrote:
Hi Alen,
first, thanks for the cleanup.
! the patch needs the keepalive feature disabled to work correctly !
OK, but we don't want to do that ;-) Any other suggestions?
Yes, I am still trying to find a better solution, that's why
Thanks, I've just skimmed over it and will read it in detail tomorrow.
Can it now be configured to use any of ide/virtio/sata/...?
Currently I force it onto a new SATA controller
(because we can't add a second IDE controller, and virtio is not supported by
all OSes),
but this can be changed if
applied
applied
fixme:
- internal-snapshot-async.patch
- backups: it seems there are a lot of changes with the bitmap support added
(backup_start has new arguments, for example)
blockdev.c:2486:7: error: conflicting types for ‘qmp_backup’
char *qmp_backup(const char *backup_file, bool has_format,
^
In file
Thanks, I've just skimmed over it and will read it in detail tomorrow.
Can it now be configured to use any of ide/virtio/sata/...?
On Tue, Jun 30, 2015 at 04:01:45PM +0200, Alexandre Derumier wrote:
This patch add support for generic storage to store the cloudinit image
+ a dedicated sata
Hi Alen,
first, thanks for the cleanup.
! the patch needs the keepalive feature disabled to work correctly !
OK, but we don't want to do that ;-) Any other suggestions?
Signed-off-by: Alen Grizonic a.grizo...@proxmox.com
---
www/manager/grid/FirewallOptions.js | 60
Hi Wolfgang,
I've begun testing your patches;
they seem to work fine here.
I have some questions:
1) does it work with Windows? (as we expose the config drive as a disk and not
a cdrom)
2) since we put it on ide3, it could break the guest disk drive order if we
don't use disk UUIDs in /etc/fstab
for
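Regarding point 2, referencing filesystems by UUID instead of device name sidesteps the ordering problem entirely. A small sketch of building such an /etc/fstab line; the UUID below is invented, and on a real guest it would come from something like `blkid -s UUID -o value /dev/sda1`:

```shell
#!/bin/sh
# Sketch: build an /etc/fstab line that is immune to drive reordering.
# The UUID is a made-up example.
fstab_line_by_uuid() {
    uuid="$1"
    mountpoint="$2"
    fstype="$3"
    printf 'UUID=%s %s %s defaults 0 1\n' "$uuid" "$mountpoint" "$fstype"
}

fstab_line_by_uuid 1234abcd-0000-4e4e-9abc-def012345678 / ext4
```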
applied, thanks!