On Tue, Aug 04, 2015 at 12:24:06PM +0200, Alexandre DERUMIER wrote:
Regarding the error when the image already exists: In Qemu.pm only disks
that were previously marked as unused get deleted
($test_deallocate_drive), also this function ignores cdrom drives.
Maybe a media=cloudinit would be
Hi
I see we still have references to the java applet in Utils.js and
VNConsole.js
Should we keep them or do we want to remove them ?
Emmanuel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
---
PVE/API2/Qemu.pm | 2 +-
PVE/QemuServer.pm | 14 +-
2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 82a8e15..38eefb4 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -41,7 +41,7 @@ my $resolve_cdrom_alias = sub {
To be applied to the v8 series.
I gave drive_is_cdrom a flag for whether to treat cloudinit images as a
cdrom drive (which it now does by default), in order to easily be able
to exclude them in places where it makes sense. $test_deallocate_storage,
for instance, now doesn't treat cloudinit images as
-my $path = $class->path($scfg, $volname);
+die "illegal name '$name' - should be 'vm-*-*'\n"
+    if $name !~ m/^vm-\d+-/;
And the reason for the above check? Or did you just copy that code from
alloc_image?
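Not part of the patch itself, but to illustrate the naming rule being discussed, here is a shell re-creation of the same `^vm-\d+-` check; the sample volume names are made up:

```shell
# Hypothetical shell version of the Perl check above: volume names must
# start with "vm-<digits>-", e.g. vm-100-disk-1.
check_name() {
    if printf '%s\n' "$1" | grep -Eq '^vm-[0-9]+-'; then
        echo "ok: $1"
    else
        echo "illegal name '$1' - should be 'vm-*-*'"
    fi
}
check_name vm-100-disk-1    # -> ok: vm-100-disk-1
check_name base-100-disk-1  # -> illegal name 'base-100-disk-1' - should be 'vm-*-*'
```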
Right. And that's why I changed the name - to keep it the same.
Is
Removed an old patch that is unnecessary for the current gfs2 version (3.1.8).
Do not throw an error when no GFS2 entry is in /etc/fstab, because
there cannot possibly be an entry after the first installation, and it's
very confusing to get an error at this point. An info message should be
enough. Using
Updating the gfs2-utils to the latest stable upstream version
for PVE 3.4. Targets the stable-3 branch from gfs2-utils
GFS2 sources used:
https://git.fedorahosted.org/cgit/gfs2-utils.git/snapshot/gfs2-utils-3.1.8.tar.gz
Thomas Lamprecht (2):
gfs2-utils: update package control files for updated
Anyway, I tested the patch, and volume_resize() now returns without errors.
The problem is that it does not resize the underlying LVM volume.
Do I need to install any drbdmanage updates/patches to make that work?
I just scanned the drbdmanage sources, and the corresponding
Please see attached a patch to implement resize for the DRBD backend.
I hope it matches all your coding style guidelines;
feedback is welcome, of course.
Please can you send patches inline? That way it is easier to review code
and add comments. I copied the code for this purpose -
I'd like to pick this up again. Have you made any more changes?
No change since the last time. (I would like to work on it again too)
Currently, generating the cloudinit image is a one-shot mechanism (and
it also currently fails if the image already exists); it ends up as a
normal (persistent)
Regarding the error when the image already exists: in Qemu.pm, only disks
that were previously marked as unused get deleted
($test_deallocate_drive); also, this function ignores cdrom drives.
Maybe a media=cloudinit would be useful after all?
On one side it would give us an easy condition to allow
Maybe qemu-2.4 has introduced something which makes it incompatible with
previous versions of qemu?
Yes, maybe.
have you tried qemu 2.4 on top of 2.4?
- Original message -
From: datanom.net m...@datanom.net
To: pve-devel pve-devel@pve.proxmox.com
Sent: Tuesday, 4 August 2015 16:33:56
Hi Alexandre,
Hi,
We also use QinQ and have submitted patches for the previous network
implementation that made use of a bridge in bridge design to achieve the
QinQ functionality.
There is also a new way to implement QinQ with the vlan-aware bridge
Hi Alexandre,
We also use QinQ and have submitted patches for the previous network
implementation that made use of a bridge in bridge design to achieve the
QinQ functionality.
The new vlan aware bridge implementation will be a lot cleaner.
When your patches are ready we will test them and
Another way, though I'm not sure it works, is to tag 802.1ad on the physical interface:
eth0.10 -- vmbrcustomer --(vlanX)-- tapX
auto vmbrcustomer1
iface vmbrcustomer1 inet manual
bridge_vlan_aware yes
bridge_ports eth0.10
bridge_stp off
bridge_fd 0
This seems to work.
(I'm not sure about the tcpdump result when vlans are stacked.)
auto vmbrcustomer1
iface vmbrcustomer1 inet manual
bridge_vlan_aware yes
bridge_ports customer1lp
bridge_stp off
bridge_fd 0
pre-up ip link add dev customer1l type veth peer name
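As a sketch (not from the thread), the veth stanza above roughly corresponds to these manual commands, printed dry-run style; the peer name `customer1lp` is an assumption inferred from the `bridge_ports` line:

```shell
# Dry-run sketch of the veth-pair setup; "run" only prints each command
# instead of executing it, so no root privileges are needed.
run() { echo "+ $*"; }
run ip link add dev customer1l type veth peer name customer1lp
run ip link set customer1l up
run ip link set customer1lp up
```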
keep them for now
On August 4, 2015 at 2:20 PM Emmanuel Kasper e.kas...@proxmox.com wrote:
Alexandre,
Am I right in thinking that for 4x bridges each with a different SVID
(101,102,103,104) that the config would look like this ?
auto vmbrcustomer1
iface vmbrcustomer1 inet manual
bridge_vlan_aware yes
bridge_ports customer1lp
bridge_stp off
bridge_fd 0
Am I right in thinking that for 4x bridges each with a different SVID
(101,102,103,104) that the config would look like this ?
Yes, I think it should work.
I'm just not sure that all pre-up/post-up commands will be run in the
correct order, but that's the idea.
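Purely as an illustration (the vmbrcustomerN -> SVID 101..104 mapping and the interface names are assumptions, following the snippet above), the four per-customer stanzas could be generated like this:

```shell
# Print one vlan-aware bridge stanza per customer; in this hypothetical
# layout, customer i uses bridge vmbrcustomer<i> with SVID 100+i.
gen_bridges() {
    for i in 1 2 3 4; do
        svid=$((100 + i))
        printf '# customer %s, SVID %s\n' "$i" "$svid"
        printf 'auto vmbrcustomer%s\n' "$i"
        printf 'iface vmbrcustomer%s inet manual\n' "$i"
        printf '    bridge_vlan_aware yes\n'
        printf '    bridge_ports customer%slp\n\n' "$i"
    done
}
gen_bridges
```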
- Original message -
From: Andrew Thrift
-#my $cmd = ['/sbin/lvextend', '-L', $size, $path];
-#run_command($cmd, errmsg => "error resizing volume '$path'");
+# FIXME if there's ever more than one volume in a resource
not sure if we ever want to support multiple volumes inside one
resource?
Why would we want to do that?
On 08/04/2015 04:35 PM, Alexandre DERUMIER wrote:
Maybe qemu-2.4 has introduced something which makes it incompatible with
previous versions of qemu?
Yes, maybe.
have you tried qemu 2.4 on top of 2.4?
I tried nested KVM virtualization with qemu 2.4, and it works here(TM)
host
Dear,
Please add the following patch to the 4.1.3 kernel releases for Proxmox
4.0.
This is needed to split up / break up some multi-device IOMMU groups.
I am using this myself to split my RocketU USB 3.0 card's USB controllers into
different IOMMU groups so that vfio can pass through each
On Tue, 04 Aug 2015 18:50:12 +0200
Emmanuel Kasper e.kas...@proxmox.com wrote:
guest (pve v4.0 VM, with cpu type set to host)
grep --quiet vmx /proc/cpuinfo && echo found
found
uname -srm
Linux 4.1.3-1-pve x86_64
kvm --version
QEMU emulator version 2.3.93, Copyright (c) 2003-2008
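The manual flag check above can be wrapped in a small script (a sketch only; vmx/svm are the standard Intel/AMD hardware-virt flags in /proc/cpuinfo):

```shell
# Sketch: report whether a cpuinfo file advertises hardware-virt flags
# (vmx for Intel, svm for AMD); with cpu type "host" the guest should
# inherit the host's flag. Takes an optional file argument for testing.
virt_flag() {
    if grep -qE '(vmx|svm)' "${1:-/proc/cpuinfo}"; then
        echo "found"
    else
        echo "not found"
    fi
}
virt_flag /proc/cpuinfo
```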
Did you try to remove:
-boot menu=on,strict=on,reboot-timeout=1000
For me it fixes the problem with qemu 2.4.
On Tue, Aug 4, 2015 at 7:14 PM, Michael Rasmussen m...@datanom.net wrote:
But can you start a kvm guest from the guest with kvm enabled?
--
Kamil Trzciński
ayu...@ayufan.eu
On Tue, 4 Aug 2015 21:20:07 +0200
Kamil Trzciński ayu...@ayufan.eu wrote:
Did you try to remove:
-boot menu=on,strict=on,reboot-timeout=1000
For me it fixes the problem with qemu 2.4.
You are right;-) But it is enough to remove menu=on. However, this has
the bad side effect that you are
Begin forwarded message:
Date: Tue, 4 Aug 2015 17:59:57 +0200
From: Michael Rasmussen m...@datanom.net
To: Alexandre DERUMIER aderum...@odiso.com
Subject: Re: [pve-devel] updated PVE 4.0 packages on pvetest
On Tue, 4 Aug 2015 16:35:45 +0200 (CEST)
Alexandre DERUMIER aderum...@odiso.com wrote:
Hi Alexandre,
This looks like it should work.
Something to be aware of, QinQ does not always have an outer tag with
ethertype 0x88a8, it can also have a tag of 0x8100 or 0x9100 depending on
the implementation.
For example:
0x88a8--0x8100: Outer-tag (SVID) of ethertype 0x88a8, Inner-tag (CVID) of ethertype 0x8100
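For illustration only (interface names and VLAN IDs are made up), stacking an 0x88a8 outer tag over an 0x8100 inner tag with iproute2 would look roughly like this, shown dry-run style:

```shell
# Dry-run sketch: outer S-tag (802.1ad, ethertype 0x88a8) on eth0, with
# an inner C-tag (802.1Q, ethertype 0x8100) stacked on top of it.
# "run" only prints each command instead of executing it.
run() { echo "+ $*"; }
run ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100
run ip link add link eth0.100 name eth0.100.200 type vlan protocol 802.1Q id 200
```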