This replaces the default vznetaddbr script with perl code.
It allows using VLAN tags, firewall bridges and openvswitch bridges,
like for qemu.
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
debian/control |2 +-
debian/patches/fix-config-path.diff | 13
These patches allow dynamically changing the bridge value (or hotplugging
a new network card) with:
vzctl set 130 --netif_add
eth0,1A:D3:A2:25:39:28,veth130.0,5E:30:FB:E1:B9:78,vmbr0v10f --save
bridge value format: vmbrX(vVLANID)?(f)?
  vmbrX:   bridge
  vVLANID: VLAN tag
  f:       firewall enabled
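As an illustration only (a sketch, not the actual perl code from the patch), the bridge value format above could be parsed with a regex like this:

```shell
# Hypothetical sketch of parsing the bridge value format vmbrX(vVLANID)?(f)?
# The real patch does this in perl (vznetaddbr) and in www/manager/Parser.js.
parse_bridge() {
  if [[ $1 =~ ^(vmbr[0-9]+)(v([0-9]+))?(f)?$ ]]; then
    local bridge=${BASH_REMATCH[1]}     # e.g. vmbr0
    local tag=${BASH_REMATCH[3]:-none}  # VLAN tag, if present
    local fw=0
    [[ -n ${BASH_REMATCH[4]} ]] && fw=1 # trailing "f" enables the firewall
    echo "bridge=$bridge tag=$tag firewall=$fw"
  else
    echo "invalid bridge value: $1" >&2
    return 1
  fi
}

parse_bridge vmbr0v10f   # bridge=vmbr0 tag=10 firewall=1
parse_bridge vmbr1       # bridge=vmbr1 tag=none firewall=0
```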
Currently, only the iface name is passed to the vznetcfg script (and vznetaddbr).
This is fine for vm start, but if we want to change veth values online
(new bridge, new VLAN, new firewall) or hotplug a new card with --netif_add,
this is not currently possible, because we read the bridge from vmid.conf.
Allow parsing and editing bridge=vmbrX(vY)?(f)?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
parse bridge=vmbrX(vY)?(f)?
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
www/manager/Parser.js | 28 ++--
www/manager/openvz/Network.js | 13 -
2 files changed, 38 insertions(+), 3 deletions(-)
diff --git a/www/manager/Parser.js
changelog:
- vznetaddbr cleanups
- remove dependency on pve-firewall
---
debian/patches/fix-config-path.diff | 13 ---
debian/patches/series |2
Hi,
I would like to have a process running cluster-wide only one at a time. I
thought about using flock in /etc/pve for doing this, but it seems that
it does not support flock.
Is there any other mechanism usable for this task?
Greets,
Stefan
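For context, a minimal flock(1) sketch of the single-instance pattern Stefan is after (paths are illustrative). On a local filesystem this works, but /etc/pve is backed by pmxcfs, a FUSE filesystem that does not implement flock, so it cannot provide a cluster-wide lock this way:

```shell
# Minimal single-instance guard using flock(1).
# NOTE: this only serializes processes on one node; putting LOCKFILE
# under /etc/pve does not work because pmxcfs does not support flock.
LOCKFILE=/tmp/my-task.lock

(
  # -n: fail immediately instead of blocking if the lock is held
  flock -n 9 || { echo "another instance is running" >&2; exit 1; }
  echo "got the lock, doing work"
  # ... long-running task ...
) 9>"$LOCKFILE"
```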
Hi Dietmar,
While testing patches sent for the zfs-plugin I have come across a small bug
related to the iSCSI options used with the kvm command to start a VM.
This bug was discovered while testing the patch adding host and target
group to comstar. The bug is that since the initiator name is not part of
On Fri, 9 May 2014 02:16:18 +0200
Michael Rasmussen m...@datanom.net wrote:
The following remains to be tested:
commit 5da23bad9844adfb61d3c093d08bf89eef86aadc: nowritecache-comstar
commit 082e79f35b2f7b75862dc3014fb7de8e65fa76c6: sparse
volumes-comstar,iet
Above tested and no errors found.
One question: Can an option be enabled/disabled in the Panel depending
on the value of a text box? If yes, how to implement?
provider = comstar: sparse, nowritecache, target group, and host group enabled
provider = istgt: sparse and nowritecache enabled
Is there any other mechanism usable for this task?
PVE::Cluster::cfs_lock() ?
(in Cluster.pm)
----- Original Message -----
From: Stefan Priebe s.pri...@profihost.ag
To: pve-devel@pve.proxmox.com
Sent: Sunday, 11 May 2014 20:40:30
Subject: [pve-devel] does /etc/pve support flock?
Hi,
it would be great to add an option to enable|disable compression per volume :)
(can be done live)
----- Original Message -----
From: Dietmar Maurer diet...@proxmox.com
To: Michael Rasmussen m...@datanom.net, pve-devel@pve.proxmox.com
Sent: Monday, 12 May 2014 05:52:03
Subject: Re: [pve-devel] zfs
Is there any other mechanism usable for this task?
PVE::Cluster::cfs_lock() ?
(in Cluster.pm)
But such locks have a timeout, so please do not try to hold them longer than 60
seconds.
PVE::Cluster::cfs_lock() enforces that timeout using an alarm timer.
We always use ACCEPT now, so when traffic from a container is accepted, the
container input rules are simply skipped?
container to container ?
venet0-venet0 ?
Damn, I haven't tested this case.
----- Original Message -----
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER (aderum...@odiso.com) aderum...@odiso.com,
pve-devel@pve.proxmox.com
Sent: Monday, 12 May 2014 06:08:59
Subject: venet
container to container ?
venet0-venet0 ?
Yes, we also want to filter container to container traffic.
Damn, I haven't tested this case.
We should really have some regression tests, but I do not know of a tool to
simulate iptables? We can write a simple simulator ourselves, but that is
Yes, we also want to filter container to container traffic.
Previously, we had a rule:
-# always allow traffic from containers?
-ruleset_addrule($ruleset, "PVEFW-FORWARD", "-i venet0 -j RETURN");
so, it didn't work at all before?
I see this iptables traffic:
FORWARD: IN=venet0 OUT=venet0
applied, thanks!
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
so, it didn't work at all before?
I am quite sure that worked.
I see this iptables traffic:
FORWARD: IN=venet0 OUT=venet0 SRC=10.3.94.204 DST=10.3.94.203 LEN=84
TOS=0x00 PREC=0x00 TTL=64 ID=25368 PROTO=ICMP TYPE=0 CODE=0
ID=1751 SEQ=1
Maybe with some magic routing rule, is it possible
Just using RETURN instead of ACCEPT should solve the problem?
Yes, but I'm not sure how to bypass rules for non-firewalled VMs in this case.
I need to think a little bit more about this.
----- Original Message -----
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER
Yes, we also want to filter container to container traffic.
Previously, we had a rule:
-# always allow traffic from containers?
-ruleset_addrule($ruleset, "PVEFW-FORWARD", "-i venet0 -j RETURN");
so, it didn't work at all before?
Here is what we produced previously:
PVEFW-FORWARD
On 12.05.2014 at 06:04, Dietmar Maurer diet...@proxmox.com wrote:
Is there any other mechanism usable for this task?
PVE::Cluster::cfs_lock() ?
(in Cluster.pm)
But such locks have a timeout, so please do not try to hold them longer than
60 seconds.
PVE::Cluster::cfs_lock() enforces that timeout using an alarm timer.
I am not familiar with the cfs locking. What happens if the process which has
set the lock dies?
Timeout (automatic unlock after 120 seconds).
I need something between 6-10min.
Again, you can't do that, so you need to find
-A PVEFW-FORWARD -i venet0 -j RETURN
So that rule is just to accept traffic to non-firewalled containers.
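To illustrate the distinction being discussed (the rules below are illustrative, not the generated PVE ruleset): ACCEPT is a terminating verdict, while RETURN only leaves the current chain, so later rules still get a chance to match.

```
# -j ACCEPT ends evaluation immediately: the packet is accepted and any
# later FORWARD rules (e.g. container-to-container filtering) never run.
# -j RETURN only exits the current chain and continues in the caller,
# so non-firewalled containers can still fall through to a policy rule.
-A PVEFW-FORWARD -i venet0 -j RETURN
```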
Ok, so I think if we use RETURN (only for venet0-OUT, it doesn't make sense for
tap/veth), it should work also with this model.
But I don't know about group rules (do we need to add marks again everywhere
???)
I think so. Maybe it is best to revert the last 10 commits ...