Hi Jesse,
Thank you very much for your reply; please see my answers inline.
On 07/06/2012 06:11 AM, Jesse Gross wrote:
> On Wed, Jul 4, 2012 at 9:27 AM, Konstantin Khorenko <[email protected]> wrote:
>> Hi,
>> we'd like to add the Open vSwitch kernel module to the OpenVZ kernel,
>> and in order to do that we've backported the appropriate code from the
>> mainstream kernel.
>> It turned out that the vport-patch functionality could also be very
>> useful for us, so we are looking into the possibility of including it
>> in our kernel as well.
>> Unfortunately mainstream does not have the vport-patch functionality
>> at the moment, and porting the vport-patch related code from the
>> openvswitch.org git and incorporating it into the code taken from
>> mainstream turned out not to be a straightforward task.
>> On the other hand, I've found that the Open vSwitch FAQ says:
>> "Work is in progress in adding these features ["patch" in particular]
>> to the upstream Linux version of the Open vSwitch kernel module."
>
> The FAQ doesn't contain the phrase that you added in brackets, so what
> you have here is very misleading. The work that this refers to is
> tunneling.
Well, I really appreciate your clarification about this; honestly, I was
under the impression that those paragraphs were not only about tunneling.
Maybe it's worth making the sentences a bit more precise so that nobody
misreads them as I did.
>> So can you please clarify:
>> * is the info in the FAQ still valid, and is the work to include
>>   "patch" in mainstream in progress?
>> * I failed to find any preliminary patches on the web, but if you
>>   have any, can you please share them or just tell me where I should
>>   look?
>
> There's no work in progress or planned to add patch ports upstream.
> I'm not enthusiastic about doing it either as I would like to remove
> uses of it rather than add more.
>
> What is your use case?
So, our case is the following:
let's assume we have a Node with physical interfaces ethX and a set of
Containers (CTs)/Virtual Machines (VMs) whose virtual interfaces are
visible on the Node as vethY.
We plan to implement the following scheme:
---[br0 (eth0)]---
---[br1 (eth1)]---
---[brX (ethX)]------[vbrX (vethA, vethB, ...)]
                  ---[vbrZ (vethM, vethN, ...)]
So we want to create a number of bridges, one per physical interface on
the Node, each bridge containing the corresponding physical interface.
The Node administrator might want to group CTs/VMs as follows:
a) CTs/VMs whose interfaces are bridged with a particular physical
   interface on the Node (there can certainly be several such groups);
b) CTs/VMs with "host only" access.
So we'd like to create separate bridges for all groups from a), as well
as for all groups from b).
If the administrator decides that the CTs/VMs from group G should work
in bridged mode via physical interface P, we simply connect brP and
vbrG with the help of the "patch" functionality.
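With the current out-of-tree module, that connection could be set up
roughly as follows (the bridge and port names here are ours, purely for
illustration):

```
# Bridge for physical interface eth0, plus a per-group bridge for group G.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ovs-vsctl add-br vbrG

# Connect br0 and vbrG with a pair of peered "patch" ports.
ovs-vsctl add-port br0 patch-br0-vbrG -- \
    set interface patch-br0-vbrG type=patch options:peer=patch-vbrG-br0
ovs-vsctl add-port vbrG patch-vbrG-br0 -- \
    set interface patch-vbrG-br0 type=patch options:peer=patch-br0-vbrG
```
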
There are several advantages to this scheme:
1) if the administrator needs to reconfigure CT/VM group G to work via
   physical interface P1, all we need to do is destroy the original
   "patch" connection with brP and create another one with brP1;
2) adding a new CT/VM to any group (or even stopping/starting an
   existing one) does not change the MAC addresses of the brX bridges,
   so network connections to the other CTs/VMs in the group are not
   affected (this problem arises if we use a single bridge for both the
   physical interface and the virtual ones).
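For advantage (1), the re-pointing amounts to deleting one patch pair
and creating another (again with illustrative names of our own):

```
# Move group G from physical interface P (bridge brP) to P1 (bridge brP1):
ovs-vsctl del-port brP patch-brP-vbrG
ovs-vsctl del-port vbrG patch-vbrG-brP
ovs-vsctl add-port brP1 patch-brP1-vbrG -- \
    set interface patch-brP1-vbrG type=patch options:peer=patch-vbrG-brP1
ovs-vsctl add-port vbrG patch-vbrG-brP1 -- \
    set interface patch-vbrG-brP1 type=patch options:peer=patch-brP1-vbrG
```
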
So this is what we plan to implement.
Please let me know if anything here is unclear; any comments and
suggestions are highly appreciated.
Thank you in advance.
--
Konstantin Khorenko
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss