On Thu, Feb 19, 2015 at 02:25:18PM -0800, Gurucharan Shetty wrote:
> On Thu, Feb 19, 2015 at 11:16 AM, Ben Pfaff <[email protected]> wrote:
> > This commit adds preliminary design documentation for Open Virtual Network,
> > or OVN, a new OVS-based project to add support for virtual networking to
> > OVS, initially with OpenStack integration.
> >
> > This initial design has been influenced by many people, including (in
> > alphabetical order) Aaron Rosen, Chris Wright, Jeremy Stribling,
> > Justin Pettit, Ken Duda, Madhu Venugopal, Martin Casado, Pankaj Thakkar,
> > Russell Bryant, and Teemu Koponen. All blunders, however, are due to my
> > own hubris.
> >
> > Signed-off-by: Ben Pfaff <[email protected]>
Thanks for finding the typos; I've fixed them now.
> > +    <li>
> > +      Some CMS systems, including OpenStack, fully start a VM only when its
> > +      networking is ready.  To support this, <code>ovn-nbd</code> notices the
> > +      new row in the <code>Bindings</code> table, and pushes this upward by
> > +      updating the <ref column="up" table="Logical_Port" db="OVN_NB"/> column
> > +      in the OVN Northbound database's <ref table="Logical_Port" db="OVN_NB"/>
> > +      table to indicate that the VIF is now up.  The CMS, if it uses this
> > +      feature, can then react by allowing the VM's execution to proceed.
> > +    </li>
> > +
> > +    <li>
> > +      On every hypervisor but the one where the VIF resides,
> > +      <code>ovn-controller</code> notices the new row in the
> > +      <code>Bindings</code> table.  This provides <code>ovn-controller</code>
> > +      the physical location of the logical port, so each instance updates the
> > +      OpenFlow tables of its switch (based on logical datapath flows in the
> > +      OVN DB <code>Pipeline</code> table) so that packets to and from the VIF
> > +      can be properly handled via tunnels.
> > +    </li>
> I wonder how much of a problem this delay in propagation of state to
> other hypervisors will be.
> At least with containers, where startup time is practically nonexistent,
> this would mean that every container application would need to be smart
> enough to retry connecting to an application on a different host.
I don't know how much of a problem it will be. Do you (or anyone) have
an idea about that? I kind of feel like proceeding on the assumption
that it will be OK, and then invoking a backup plan if not.
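For what it's worth, the handshake in the quoted text above is simple enough
to model in a few lines.  This is just a plain-Python sketch with dicts
standing in for the OVSDB tables (table and column names follow the design
doc; the real ovn-nbd reacts to database change notifications rather than
being called explicitly):

```python
# Toy model of the VIF handshake: ovn-nbd copies Bindings presence into
# the Logical_Port "up" column, and a CMS gates VM startup on it.
# Dicts stand in for OVSDB tables; this is illustrative only.

def ovn_nbd_sync(bindings, logical_ports):
    """Mark each logical port up iff some Bindings row claims it."""
    bound = {row["logical_port"] for row in bindings}
    for name, port in logical_ports.items():
        port["up"] = name in bound

def cms_may_start_vm(logical_ports, vif):
    """A CMS using this feature lets the VM run only once its VIF is up."""
    return logical_ports.get(vif, {"up": False})["up"]
```

So before the hypervisor creates the Bindings row the VM stays parked, and
once the row appears and ovn-nbd has synced, the CMS lets it proceed.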
But I'm actually much more concerned about what seems to me a bigger
problem with containers. As I understand it, containers get created and
destroyed (not just booted and shut down) very frequently. Creating and
destroying a VM and its VIFs takes a long round trip through the whole
CMS. I'm worried that's going to be slow.
> > +    <column name="name">
> > +      The logical port name.  The name used here must match those used in the
> > +      <ref key="iface-id" table="Interface" column="external_ids"
> > +      db="Open_vSwitch"/> in the <ref db="Open_vSwitch"/> database's <ref
> > +      table="Interface" db="Open_vSwitch"/> table, because hypervisors use
> > +      <ref key="iface-id" table="Interface" column="external_ids"
> > +      db="Open_vSwitch"/> as a lookup key for logical ports.
> > +    </column>
> The above fits nicely if you want to run containers on hypervisors too.
> Since the document clearly says that we only want to start by integrating
> with one CMS - OpenStack for VMs - that is alright.  But if we want to run
> multiple containers inside each tenant VM, and then provide individual IP
> addressability and policies on each container interface, then we should
> probably update IntegrationGuide.md to include the concept of
> "sub-interfaces", where each VM VIF can represent multiple container
> interfaces.  OpenStack currently does not support such a concept, but it
> may make sense to build the infrastructure for it into OVN while we are
> designing.  This will be useful in a world where containers, VMs, and
> physical machines are interconnected with each other.
I'd really appreciate it if you'd propose something to start that off,
then we can workshop it on the list. I don't feel comfortable with
container networking yet.
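To make the matching requirement in the quoted docs concrete, here is a
sketch of the lookup a hypervisor does.  The dicts stand in for the
Open_vSwitch and OVN databases, and the names are illustrative:

```python
# Sketch of the hypervisor-side lookup described in the docs:
# external_ids:iface-id on an Open_vSwitch Interface row is the key
# into OVN's logical ports.  Dicts stand in for the databases.

def logical_port_for(interface_row, logical_port_names):
    """Return the logical port bound to this Interface, or None."""
    iface_id = interface_row.get("external_ids", {}).get("iface-id")
    if iface_id is not None and iface_id in logical_port_names:
        return iface_id
    return None
```

If the names don't match, the lookup silently finds nothing, which is why
the docs insist on the names being identical.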
> > +  <table name="ACL" title="Access Control List (ACL) rule">
> > +    <p>
> > +      Each row in this table represents one ACL rule for the logical switch
> > +      in its <ref column="switch"/> column.  The <ref column="action"/>
> > +      column for the highest-<ref column="priority"/> matching row in this
> > +      table determines a packet's treatment.  If no row matches, packets are
> > +      allowed by default.  (Default-deny treatment is possible: add a rule
> > +      with <ref column="priority"/> 0, <code>true</code> as
> > +      <ref column="match"/>, and <code>deny</code> as
> > +      <ref column="action"/>.)
> > +    </p>
> > +
> > +    <column name="switch">
> > +      The switch to which the ACL rule applies.
> > +    </column>
> > +
> Shouldn't an ACL rule apply to a logical port instead of a logical switch?
It does, or rather, it can match on the logical ingress and egress ports
within a particular logical switch.
I can see how this looks a little confusing. Does writing it this way
help?
<column name="switch">
The switch to which the ACL rule applies. The expression in the
<ref column="match"/> column may match against logical ports
within this switch.
</column>
I've tentatively made that change.
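In case it helps, the evaluation rule from the docs fits in a few lines of
Python.  The match expressions are reduced here to pre-evaluated booleans
for a given packet; the real match column holds a richer expression
language:

```python
# ACL semantics from the quoted docs: the action of the highest-priority
# matching row wins; when no row matches, the packet is allowed.
# Rows are (priority, matched, action) tuples, with 'matched' already
# evaluated against the packet.

def acl_decision(rows):
    """Return 'allow' or 'deny' for a packet given its candidate rows."""
    matching = [(prio, action) for prio, matched, action in rows if matched]
    if not matching:
        return "allow"          # default-allow when nothing matches
    _, action = max(matching, key=lambda r: r[0])
    return action
```

A priority-0 rule with <code>true</code> as the match and <code>deny</code>
as the action turns this into default-deny, exactly as the docs describe.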
_______________________________________________
dev mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/dev