On Wed, 28 Dec 2016 19:36:58 +0100 (CET)
Alexandre DERUMIER wrote:
> >> pre-up vconfig add eth0 1100 < New requirement
> >> post-down vconfig rem eth0 1100 < New requirement
>
> Note that vconfig has been deprecated for years, you should use "ip" instead:
>
> # ip link add link eth0 name eth0.1100 type vlan id 1100
>
Yes, this is how it goes when on a rolling release - you are stuck
with deprecated packages.

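In /etc/network/interfaces that translates to hook lines like the following
(a sketch of the iproute2 equivalent of the vconfig hooks quoted above;
device and VLAN id are the ones from this thread, the post-down line is an
assumption):

pre-up ip link add link eth0 name eth0.1100 type vlan id 1100
post-down ip link del eth0.1100
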
On Wed, 28 Dec 2016 17:21:44 +0100 (CET)
Dietmar Maurer wrote:

I am not sure what you are talking about here, but we removed the vlan
package dependency years ago...

We use our own vlan implementation in:

https://git.proxmox.com/?p=pve-manager.git;a=blob;f=vlan;h=abe646ada79053f7056922653b21ddc79260277c;hb=HEAD

Or am I missing something?

The corresponding commit is

On Wed, 28 Dec 2016 14:56:24 +0100
Michael Rasmussen wrote:
> pre-up vconfig add eth0 1100 < New requirement
> post-down vconfig rem eth0 1100 < New requirement
>
Sorry, copy-paste error. It should be:
pre-up vconfig add eth0 1100 < New requirement
post-down vconfig
Hi all,

As of bridge-utils 1.5-11 the integration with the vlan package has been
removed, so to get vlan aware bridges working with /etc/network/interfaces
again, something like the following is needed:

# Create the bridge
auto vmbr1100
iface vmbr1100 inet static
        address 10.0.4.60
        netmask
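A complete stanza along these lines might look like the sketch below; the
netmask and bridge options are illustrative assumptions, only the address
and device names come from this thread:

auto vmbr1100
iface vmbr1100 inet static
        address 10.0.4.60
        netmask 255.255.255.0
        bridge_ports eth0.1100
        bridge_stp off
        bridge_fd 0
        pre-up ip link add link eth0 name eth0.1100 type vlan id 1100
        post-down ip link del eth0.1100
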
> I have the same question: Is there a known issue related
> to CONFIG_RT_GROUP_SCHED?
We do not use this setting, so I don't know.
Hello Dietmar,

I would like to build a custom kernel to enable CONFIG_RT_GROUP_SCHED.
My goal is to run at most 2 containers and 3-5 VMs per physical machine.

I have the same question: Is there a known issue related to
CONFIG_RT_GROUP_SCHED?

I'm really new to Proxmox, so any tips to ease the first steps are welcome.
I just split out the perl API client into a separate package:
https://git.proxmox.com/?p=pve-apiclient.git;a=summary
and removed the API2Client class from pve-manager package:
https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=11be8d6e47d08bf82ca8c5e2daa12ddb8f8ba976
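The client is brand new, so as a quick smoke test you can exercise the same
REST API it talks to with plain curl (these are the documented /api2/json
endpoints; host and credentials below are placeholders):

# fetch an auth ticket + CSRF token
curl -k -d 'username=root@pam' -d 'password=SECRET' \
    https://pve.example.com:8006/api2/json/access/ticket
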
Sorry, I have just updated the repo this minute - please try again:
> W: Failed to fetch
> http://download.proxmox.com/debian/dists/jessie/pve-no-subscription/binary-amd64/Packages
> Hash Sum mismatch
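If the mismatch persists after a retry, clearing apt's cached index files
usually helps (standard apt behaviour, nothing specific to this repository):

rm -rf /var/lib/apt/lists/*
apt-get update
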
Hi,
W: Failed to fetch
http://download.proxmox.com/debian/dists/jessie/pve-no-subscription/binary-amd64/Packages
Hash Sum mismatch
Stefan
We use it to stop the remote NBD server.
Signed-off-by: Alexandre Derumier
---
PVE/CLI/qm.pm | 24
PVE/QemuServer.pm | 6 ++
2 files changed, 30 insertions(+)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index c13ff4c..4789ee1 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/
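For reference, the monitor-level operation such a command presumably wraps
can also be issued by hand through HMP (the vmid 100 is an example; the
command names are qemu's built-in HMP NBD commands):

qm monitor 100
qm> nbd_server_stop
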
This is a workaround for the infinite NBD connect timeout.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 50 ++
1 file changed, 46 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a2acf0c..5361bb4 100644
changelog:
- add a socat unix tunnel as workaround for the connect timeout
- forbid disk migration if a snapshot exists

Please review the socat code; I think it's clean, but I'm no expert with
perl fork.
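The idea of the tunnel, sketched with plain socat (socket path, port and
direction are illustrative assumptions, not the values from the patch):

# bridge a local unix socket to the remote NBD port, so qemu connects
# locally and never hits the long TCP connect timeout
socat UNIX-LISTEN:/var/run/nbd-tunnel.sock,fork TCP:targetnode:60000 &

# drive-mirror can then use qemu's nbd+unix URI syntax:
#   nbd+unix:///drive-scsi0?socket=/var/run/nbd-tunnel.sock
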
This will create a new drive for each local drive found, and start the VM
with these new drives.

If targetstorage == 1, we use the same sid (storage ID) as the original VM
disk.

An NBD server is started in qemu and exposes the local volumes on a network
port.
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 14 +++
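What the target side sets up can be reproduced by hand with the HMP
equivalents (drive name and port are examples; the patch itself drives this
over QMP):

qm monitor 100
qm> nbd_server_start 0.0.0.0:60000
qm> nbd_server_add -w drive-scsi0
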
This allows migrating disks on local storage to a remote node's storage.

When the VM starts on the target node, new volumes are created and exposed
through qemu's embedded NBD server.

qemu drive-mirror is launched on the source VM for each disk, with the NBD
server as target.

When drive-mirror reaches 100% on one disk,
we can use multiple drive_mirror jobs in parallel.

block-job-complete can be skipped if we want to add more mirror jobs later.

Also add support for NBD URIs in qemu_drive_mirror.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 171 +++---
1 fi
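The kind of QMP call qemu_drive_mirror would issue with an NBD target looks
roughly like this (device, host and port are examples; "mode": "existing"
because qemu cannot create the NBD export itself):

{ "execute": "drive-mirror",
  "arguments": { "device": "drive-scsi0",
                 "target": "nbd:targetnode:60000:exportname=drive-scsi0",
                 "mode": "existing",
                 "sync": "full",
                 "format": "raw" } }
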
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index d0070a6..d21f3df 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2443,21 +2443,29 @@ __PACKAGE__->register_met
like query-block-jobs.

The qmp socket can be busy when block jobs are running.
Signed-off-by: Alexandre Derumier
---
PVE/QMPClient.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/QMPClient.pm b/PVE/QMPClient.pm
index 991a8f4..9e32533 100755
--- a/PVE/QMPClient.pm
+++ b/PVE/QMPClient.pm
@
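For illustration, polling a running mirror job with that command looks like
this (reply fields are the standard QMP ones, values are made up and the
reply is trimmed):

{ "execute": "query-block-jobs" }

{ "return": [ { "device": "drive-scsi0", "type": "mirror",
                "len": 10737418240, "offset": 5368709120,
                "busy": true, "paused": false } ] }
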
Oh, we set cpu.cfs_quota_us and cpu.cfs_period_us.
But please note that the 'ls' command always reports size 0 for those files.

The kernel configuration CONFIG_RT_GROUP_SCHED is not set in our kernel.
Not sure if we should enable that?
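A quick way to see the real values despite the zero size: cgroup files are
virtual, so read them with cat (paths below assume the v1 cpu controller
and an example container id):

ls -l /sys/fs/cgroup/cpu/lxc/100/cpu.cfs_quota_us    # shows size 0
cat /sys/fs/cgroup/cpu/lxc/100/cpu.cfs_quota_us      # prints the quota
cat /sys/fs/cgroup/cpu/lxc/100/cpu.cfs_period_us     # prints the period
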