Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-26 Thread Kevin Benton
That's just what I mean by horizontal splitting, which is limited for some
features. For example, ports belonging to the BSN driver and the OVS driver
can't communicate with each other on the same tunnel network, nor do
security groups work across both sides.

There is no tunnel network in this case, just VLAN networks. Security
groups work fine and they can communicate with each other over the network.
The IVS agent wires its ports' security groups and OVS wires its own.
Security group filtering is local to a port, so why did you think that
wouldn't work?

The agent notifications are handled by other common code in ML2, so thin
MDs can be seamlessly integrated with each other horizontally for all
features, like tunnel l2pop.

That's just the tunnel coordination issue that has already been brought up.
That's orthogonal to whether or not a mechanism driver is 'thin' or 'fat'.
Someone could implement another 'fat' driver that doesn't communicate with
a backend and it could still be incompatible with the OVS driver if it sets
up tunnels in its own way.


To bring this back to the relevant topic. OVN can have an ML2 driver that
calls a backend without having neutron agents (agents != ML2).
Interoperability with other vxlan drivers will be an issue because there
isn't a general solution for that yet. That's still better (from an
interoperability perspective) than being a monolithic plugin that doesn't
allow anything else to run.
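As a concrete illustration of that shape, here is a minimal sketch of an agentless, ML2-style mechanism driver that proxies postcommit events to a backend over HTTP instead of notifying neutron agents. The class and method names mirror ML2's MechanismDriver interface, but the backend client and URL are hypothetical stand-ins, not OVN's actual code:

```python
# Sketch of an agentless, ML2-style mechanism driver that pushes work to an
# external backend instead of notifying neutron agents. BackendClient and
# its URL are hypothetical; a real driver would subclass
# neutron.plugins.ml2.driver_api.MechanismDriver and use a real REST client.
import json
import urllib.request


class BackendClient:
    """Tiny REST client for the (hypothetical) backend controller."""

    def __init__(self, base_url):
        self.base_url = base_url

    def post(self, path, body):
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
        )
        return urllib.request.urlopen(req)


class AgentlessMechanismDriver:
    """Handles ML2 postcommit events with HTTP calls; no agent involved."""

    def __init__(self, client):
        self.client = client

    def create_network_postcommit(self, context):
        # context.current is the resource dict ML2 passes to drivers.
        self.client.post("/networks", context.current)

    def create_port_postcommit(self, context):
        self.client.post("/ports", context.current)
```

The point is that nothing here listens for agent heartbeats or sends RPC fanouts; the driver's whole job is translating Neutron's commit events into backend calls.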

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
The fact that a system doesn't use a neutron agent is not a good
justification for monolithic vs driver. The VLAN drivers co-exist with OVS
just fine when using VLAN encapsulation even though some are agent-less.
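Running such drivers side by side is just a configuration matter; a hypothetical ml2_conf.ini for a mixed VLAN deployment might look like this (the driver names and VLAN range here are illustrative, not a tested configuration):

```ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
# Both drivers are consulted for every port; each one binds only the
# ports on hosts it is responsible for.
mechanism_drivers = openvswitch,bigswitch

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199
```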

There is a missing way to coordinate connectivity with tunnel networks
across drivers, but that doesn't mean you can't run multiple drivers to
handle different types or just to provide additional features (auditing,
more access control, etc).
On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 ML2 has been designed for the co-existence of multiple heterogeneous
 backends, and it works well for all agent solutions: OVS, Linux Bridge, and
 even ofagent.

 However, when it comes to the various agentless solutions, especially
 the various SDN controllers (except for the Ryu-Lib style), the
 Mechanism Driver becomes the new monolithic place despite the benefits of
 code reduction: MDs can inter-operate neither between themselves nor
 with the ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2 framework
 (also inter-operating with the native Neutron L3/service plugins), while
 all the other fat MDs (agentless) go with the old style of monolithic
 plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins? That is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide (Table 7.3,
 Available networking plug-ins). So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” :-)



 Regards,

 Amit Saha

 Cisco, Bangalore






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
Yeah, it seems ML2 at the least should save you a lot of boilerplate.
On Feb 25, 2015 2:32 AM, Russell Bryant rbry...@redhat.com wrote:

 On 02/24/2015 05:38 PM, Kevin Benton wrote:
  OVN implementing its own control plane isn't a good reason to make it a
  monolithic plugin. Many of the ML2 drivers are for technologies with
  their own control plane.
 
  Going with the monolithic plugin only makes sense if you are certain
  that you never want interoperability with other technologies at the
  Neutron level. Instead of ruling that out this early, why not make it
  an ML2 driver and then change to a monolithic plugin if you run into
  some fundamental issue with ML2?

 That was my original thinking.  I figure the important code of the ML2
 driver could be reused if/when the switch is needed.  I'd really just
 take the quicker path to making something work unless it's obvious that
 ML2 isn't the right path.  As this thread is still ongoing, it certainly
 doesn't seem obvious.

 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
You can horizontally split as well (if I understand what axis definitions
you are using). The Big Switch driver for example will bind ports that
belong to hypervisors running IVS while leaving the OVS driver to bind
ports attached to hypervisors running OVS.

I don't fully understand your comments about the architecture of neutron.
Most work is delegated to either agents or a backend server. Basically
every ML2 driver pushes the work via an agent notification or an HTTP call
of some sort. If you do want to have a discussion about the architecture of
neutron, please start a new thread. This one is related to developing an
OVN plugin/driver and we have already diverged too far.

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
In the cases I'm referring to, OVS handles the security groups and
vswitch.  The other drivers handle fabric configuration for VLAN tagging to
the host and whatever other plumbing they want to do.


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
Oh, what you mean is vertical splitting, while I'm talking about horizontal
splitting.

I'm a little confused about why Neutron is designed so differently from
Nova and Cinder. In fact an MD could be very simple, delegating nearly
everything out to the agent. Remember the Cinder volume manager? The real
storage backend could also be deployed outside the server farm as dedicated
hardware, not necessarily a local host-based resource. The agent could act
as a proxy to an outside module, instead of placing a heavy burden on
central plugin servers, and all backends could then inter-operate and
co-exist seamlessly (like a single vxlan across ovs and tor in a hybrid
deployment).




Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

So how about security groups, and all the other things which need
coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 10:50 AM, Kevin Benton blak...@gmail.com wrote:

 You can horizontally split as well (if I understand what axis definitions
 you are using). The Big Switch driver for example will bind ports that
 belong to hypervisors running IVS while leaving the OVS driver to bind
 ports attached to hypervisors running OVS.


That's just what I mean by horizontal splitting, which is limited for some
features. For example, ports belonging to the BSN driver and the OVS driver
can't communicate with each other on the same tunnel network, nor do
security groups work across both sides.


  I don't fully understand your comments about the architecture of
 neutron. Most work is delegated to either agents or a backend server.
 Basically every ML2 driver pushes the work via an agent notification or
 an HTTP call of some sort


Here is the key difference: thin MDs such as ovs and bridge never push any
work to the agent; they only handle port binding, acting like a scheduler
that selects the backend vif type. The agent notifications are handled by
other common code in ML2, so thin MDs can be seamlessly integrated with
each other horizontally for all features, like tunnel l2pop. On the other
hand, a fat MD pushes all of its work to the backend through HTTP calls,
which partly blocks horizontal inter-operation with other backends.
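A thin mechanism driver in this sense does little more than port binding: it checks that one of its agents is alive on the host and picks a vif type for a suitable segment, leaving all actual wiring to the agent via ML2's common notification code. A rough sketch, with illustrative names rather than the real ovs driver's classes:

```python
# Sketch of a "thin" mechanism driver: its only real job is port binding,
# i.e. selecting a vif type for a segment when one of its agents runs on
# the host. All wiring is done by the agent through ML2's common RPC
# notifications. Agent type, vif type, and the dict shapes are illustrative.
AGENT_TYPE = "My vSwitch agent"   # hypothetical agent type
VIF_TYPE = "myvswitch"            # hypothetical vif type


class ThinMechanismDriver:
    def __init__(self, agents_by_host):
        # host -> list of agent dicts, as the ML2 plugin would supply them
        self.agents_by_host = agents_by_host

    def _check_segment(self, segment):
        # Bind only network types this driver's agent knows how to wire up.
        return segment["network_type"] in ("flat", "vlan", "vxlan")

    def bind_port(self, host, segments):
        """Return (segment_id, vif_type), or None so another driver tries."""
        agents = self.agents_by_host.get(host, [])
        if not any(a["agent_type"] == AGENT_TYPE and a["alive"]
                   for a in agents):
            return None  # no live agent on this host
        for segment in segments:
            if self._check_segment(segment):
                return (segment["id"], VIF_TYPE)
        return None
```

Because the driver only answers "can my agent handle this port, and with which vif type", two such drivers can share one deployment, each binding the hosts where its own agent lives.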

Then I'm thinking about this pattern: ML2 with a thin MD -> agent -> HTTP
call to backend. That should make horizontal inter-operation much easier.



Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Russell Bryant
On 02/24/2015 05:38 PM, Kevin Benton wrote:
 OVN implementing its own control plane isn't a good reason to make it a
 monolithic plugin. Many of the ML2 drivers are for technologies with
 their own control plane.
 
 Going with the monolithic plugin only makes sense if you are certain
 that you never want interoperability with other technologies at the
 Neutron level. Instead of ruling that out this early, why not make it
 an ML2 driver and then change to a monolithic plugin if you run into
 some fundamental issue with ML2?

That was my original thinking.  I figure the important code of the ML2
driver could be reused if/when the switch is needed.  I'd really just
take the quicker path to making something work unless it's obvious that
ML2 isn't the right path.  As this thread is still ongoing, it certainly
doesn't seem obvious.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Fawad Khaliq
On Wed, Feb 25, 2015 at 5:34 AM, Sukhdev Kapur sukhdevka...@gmail.com
wrote:

 Folks,

 A great discussion. I am no expert at OVN, hence I want to ask a question.
 The answer may make a case that it should probably be an ML2 driver as
 opposed to a monolithic plugin.

 Say a customer wants to deploy an OVN based solution and use HW devices
 from one vendor for L2 and L3 (e.g. Arista or Cisco), and wants to use
 another vendor for services (e.g. F5 or A10) - how can that be supported?

 If OVN goes in as ML2 driver, I can then run ML2 and Service plugin to
 achieve above solution. For a monolithic plugin, don't I have an issue?

On the specifics of service plugins: service plugins and standalone plugins
can co-exist to provide a solution with advanced services from different
vendors. Some existing monolithic plugins (e.g. PLUMgrid) have blueprints
deployed using this approach.


 regards..
 -Sukhdev


 On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 I think we're speculating a lot about what would be best for OVN, whereas
 we should probably just expose the pros and cons of ML2 drivers vs a
 standalone plugin (as I said earlier, standalone does not necessarily
 imply monolithic *)

 I reckon the job of the Neutron community is to provide a full picture to
 OVN developers - so that they could make a call on the integration strategy
 that best suits them.
 On the other hand, if we're planning to commit to a model where ML2 is
 not anymore a plugin but the interface with the API layer, then any choice
 which is not a ML2 driver does not make any sense. Personally I'm not sure
 we ever want to do that, at least not in the near/medium term, but I'm one
 and hardly representative of the developer/operator communities.

 Salvatore


  * In particular, with the advanced services split-out, the term monolithic
  simply does not mean anything anymore.

 On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
 wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, it's not really ML2 that
 controls what happens - it's the drivers within the framework. I don't think
 ML2 really limits what drivers can do, as long as a virtual network can be
 described as a set of static and possibly dynamic network segments. ML2 is
 intended to impose as few constraints on drivers as possible.
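[To make the hook-based contract described above concrete, here is a minimal, self-contained sketch. The MechanismDriver stub below only approximates Neutron's real neutron.plugins.ml2.driver_api.MechanismDriver - the real one receives rich context objects rather than plain dicts - and SketchOVNDriver/FakeBackend are hypothetical names used for illustration only:]

```python
# Simplified sketch of the ML2 mechanism-driver pattern. The stub base
# class only approximates neutron.plugins.ml2.driver_api's
# MechanismDriver; real drivers receive NetworkContext objects, not dicts.

class MechanismDriver(object):
    """Stub of the ML2 driver contract: per-operation hook methods."""

    def initialize(self):
        pass

    def create_network_precommit(self, context):
        """Runs inside the DB transaction; raising rolls it back."""
        pass

    def create_network_postcommit(self, context):
        """Runs after commit; backend calls belong here."""
        pass


class FakeBackend(object):
    """Hypothetical stand-in for a controller's API client."""

    def __init__(self):
        self.switches = {}

    def create_logical_switch(self, net_id, name):
        self.switches[net_id] = name


class SketchOVNDriver(MechanismDriver):
    """Hypothetical driver forwarding events to its own control plane."""

    def __init__(self, backend):
        self.backend = backend

    def create_network_postcommit(self, context):
        # ML2 has already validated and stored the network; the driver
        # only programs its backend.
        self.backend.create_logical_switch(context['id'], context['name'])


backend = FakeBackend()
driver = SketchOVNDriver(backend)
driver.create_network_postcommit({'id': 'net-1', 'name': 'demo'})
print(backend.switches)  # {'net-1': 'demo'}
```

[Even in this sketch the split is visible: segment allocation and DB work stay in the framework, while the driver only reacts to events.]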

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't
 sound immediately useful, there are several potential use cases to
 consider. One is that it allows new technology to be introduced into an
 existing cloud alongside what previously existed. Migration from one ML2
 driver to another may be a lot simpler (and/or flexible) than migration
 from one plugin to another. Another is that additional drivers can support
 special cases, such as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
  wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com
 wrote:

  Russell and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers will
 be, and that was my initial concern with doing ML2 vs. full plugin. With
 the HW VTEP support in OVN+OVS, you can tie in physical devices this way.
 Anyways, this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
+1 to separate monolithic OVN plugin

ML2 was designed for the co-existence of multiple heterogeneous
backends, and it works well for all agent solutions: OVS, Linux Bridge, and
even ofagent.

However, when it comes to agentless solutions, especially the various SDN
controllers (except for the Ryu-Lib style), the Mechanism Driver becomes the
new monolithic place despite the benefit of code reduction: MDs can't
interoperate either with each other or with the ovs/bridge agent L2pop,
because each MD has its own exclusive vxlan mapping/broadcasting solution.

So my suggestion is to keep the thin MDs (with agents) in the ML2 framework
(also interoperating with the native Neutron L3/service plugins), while all
the other fat MDs (agentless) go with the old style of monolithic plugin,
with all L2-L7 features tightly integrated.

On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I am
 getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” ☺



 Regards,

 Amit Saha

 Cisco, Bangalore







Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Kyle Mestery
On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

 Russell and I have already merged the initial ML2 skeleton driver [1].

 The thinking is that we can always revert to a non-ML2 driver if needed.


 If nothing else an authoritative decision on a design direction saves us
 the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
 However, since the same kind of approach has been adopted for ODL I guess
 this provides some sort of validation.


To be honest, after thinking about this last night, I'm now leaning towards
doing this as a full plugin. I don't really envision OVN running with other
plugins, as OVN is implementing its own control plane, as you say. So the
value of using ML2 is questionable.


 I'm not sure how useful using OVN with other drivers will be, and
 that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


 That was also kind of my point regarding the control plane bits provided
 by ML2 which OVN does not need. Still, the fact that we do not use a
 function does not do any harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

 See above. I'd like to propose we move OVN to a full plugin instead of an
ML2 MechanismDriver.

Kyle


 Salvatore



 Thanks,
 Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

 One example of this that is common now is having the current OVS driver
 responsible for setting up the vswitch and then having a ToR driver (e.g.
 Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

 I suppose with an overlay it's easier to take the route that you don't
 want to be compatible with other networking stuff at the Neutron layer
 (e.g. integration with the 3rd parties is orchestrated somewhere else). In
 that case, the above scenario wouldn't make much sense to support, but it's
 worth keeping in mind.

 On Mon, Feb 23, 2015 at 10:28 AM, Salvatore Orlando sorla...@nicira.com
  wrote:

 I think there are a few factors which influence the ML2 driver vs
 monolithic plugin debate, and they mostly depend on OVN rather than
 Neutron.
 From a Neutron perspective both plugins and drivers (as long as they
 live in their own tree) will be supported in the foreseeable future. If an
 ML2 mech driver is not the best option for OVN that would be ok - I don't
 think the Neutron community advises development of an ML2 driver as the
 preferred way for integrating with new backends.

 The ML2 framework provides a long list of benefits that mechanism
 driver developers can leverage.
 Among those:
 - The ability of leveraging Type drivers which relieves driver
 developers from dealing with network segment allocation
 - Post-commit and (for most operations) pre-commit hooks for performing
 operations on the backend
 - The ability to leverage some of the features offered by Neutron's
 built-in control-plane such as L2-population
 - A flexible mechanism for enabling driver-specific API extensions
 - Promotes modular development and integration with higher-layer
 services, such as L3. For instance OVN could provide the L2 support for
 Neutron's built-in L3 control plane
 - The (potential afaict) ability of interacting with other mechanism
 drivers such as those operating on physical appliances in the data center
 - add your benefit here

 In my opinion OVN developers should look at ML2 benefits, and check
 which ones apply to this specific platform. I'd say that if there are 1 or
 2 checks in the above list, maybe it would be the case to look at
 developing a ML2 mechanism driver, and perhaps a L3 service plugin.
 It is worth noting that ML2, thanks to its type and mechanism drivers,
 also provides some control plane capabilities. If those capabilities are,
 however, on OVN's roadmap, it might instead be worth looking at a
 monolithic plugin, which can also be easily implemented by inheriting
 from neutron.db.db_base_plugin_v2.NeutronDbPluginV2, and then adding all
 the python mixins for the extensions the plugin needs to support.
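[As a rough illustration of that inheritance pattern: the base class and mixin below are tiny stubs standing in for neutron.db.db_base_plugin_v2.NeutronDbPluginV2 and Neutron's extension mixins, and SketchOVNPlugin is a hypothetical name - the real classes pull in the whole Neutron DB layer:]

```python
# Sketch of the monolithic-plugin pattern: inherit the core DB plugin
# and layer extension mixins on top. Both base classes are stubs standing
# in for Neutron's real NeutronDbPluginV2 and l3_db-style mixins.

class NeutronDbPluginV2(object):
    """Stub: the real class implements the core API CRUD against the DB."""

    def create_network(self, context, network):
        return {'id': 'fake-uuid', 'name': network['network']['name']}


class L3Mixin(object):
    """Stub extension mixin, analogous to Neutron's L3 DB mixins."""

    def create_router(self, context, router):
        return {'id': 'fake-router', 'name': router['router']['name']}


class SketchOVNPlugin(NeutronDbPluginV2, L3Mixin):
    # Extension aliases the plugin advertises to the API layer.
    supported_extension_aliases = ['router']

    def create_network(self, context, network):
        result = super(SketchOVNPlugin, self).create_network(context, network)
        # A monolithic plugin drives its backend directly here, with no
        # mechanism-driver indirection, e.g.:
        # self.backend.create_logical_switch(result['id'], result['name'])
        return result


plugin = SketchOVNPlugin()
net = plugin.create_network(None, {'network': {'name': 'demo'}})
router = plugin.create_router(None, {'router': {'name': 'r1'}})
print(net['name'], router['name'])  # demo r1
```

[The contrast with the mechanism-driver approach is that L2 through L7 behavior all hangs off one plugin class instead of being split across the ML2 framework and per-backend drivers.]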

 Salvatore


 On 23 February 2015 at 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Kevin Benton
OVN implementing its own control plane isn't a good reason to make it a
monolithic plugin. Many of the ML2 drivers are for technologies with their
own control plane.

Going with the monolithic plugin only makes sense if you are certain that
you never want interoperability with other technologies at the Neutron
level. Instead of ruling that out this early, why not make it as an ML2
driver and then change to a monolithic plugin if you run into some
fundamental issue with ML2?
On Feb 24, 2015 8:16 AM, Kyle Mestery mest...@mestery.com wrote:

 On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

 Russell and I have already merged the initial ML2 skeleton driver [1].

 The thinking is that we can always revert to a non-ML2 driver if needed.


 If nothing else an authoritative decision on a design direction saves us
 the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
 However, since the same kind of approach has been adopted for ODL I guess
 this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


 I'm not sure how useful using OVN with other drivers will be, and
 that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


 That was also kind of my point regarding the control plane bits provided
 by ML2 which OVN does not need. Still, the fact that we do not use a
 function does not do any harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

 See above. I'd like to propose we move OVN to a full plugin instead of an
 ML2 MechanismDriver.

 Kyle


 Salvatore



 Thanks,
 Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

 One example of this that is common now is having the current OVS driver
 responsible for setting up the vswitch and then having a ToR driver (e.g.
 Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

 I suppose with an overlay it's easier to take the route that you don't
 want to be compatible with other networking stuff at the Neutron layer
 (e.g. integration with the 3rd parties is orchestrated somewhere else). In
 that case, the above scenario wouldn't make much sense to support, but it's
 worth keeping in mind.

 On Mon, Feb 23, 2015 at 10:28 AM, Salvatore Orlando 
 sorla...@nicira.com wrote:

 I think there are a few factors which influence the ML2 driver vs
 monolithic plugin debate, and they mostly depend on OVN rather than
 Neutron.
 From a Neutron perspective both plugins and drivers (as long as they
 live in their own tree) will be supported in the foreseeable future. If an
 ML2 mech driver is not the best option for OVN that would be ok - I don't
 think the Neutron community advises development of an ML2 driver as the
 preferred way for integrating with new backends.

 The ML2 framework provides a long list of benefits that mechanism
 driver developers can leverage.
 Among those:
 - The ability of leveraging Type drivers which relieves driver
 developers from dealing with network segment allocation
 - Post-commit and (for most operations) pre-commit hooks for
 performing operations on the backend
 - The ability to leverage some of the features offered by Neutron's
 built-in control-plane such as L2-population
 - A flexible mechanism for enabling driver-specific API extensions
 - Promotes modular development and integration with higher-layer
 services, such as L3. For instance OVN could provide the L2 support for
 Neutron's built-in L3 control plane
 - The (potential afaict) ability of interacting with other mechanism
 drivers such as those operating on physical appliances in the data center
 - add your benefit here

 In my opinion OVN developers should look at ML2 benefits, and check
 which ones apply to this specific platform. I'd say that if there are 1 or
 2 checks in the above list, maybe it 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Salvatore Orlando
I think we're speculating a lot about what would be best for OVN whereas we
should probably just expose pro and cons of ML2 drivers vs standalone
plugin (as I said earlier on indeed it does not necessarily imply
monolithic *)

I reckon the job of the Neutron community is to provide a full picture to
OVN developers - so that they could make a call on the integration strategy
that best suits them.
On the other hand, if we're planning to commit to a model where ML2 is not
anymore a plugin but the interface with the API layer, then any choice
which is not a ML2 driver does not make any sense. Personally I'm not sure
we ever want to do that, at least not in the near/medium term, but I'm one
and hardly representative of the developer/operator communities.

Salvatore


* In particular, with the advanced services split-out, the term monolithic
simply does not mean anything anymore.

On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, it's not really ML2 that
 controls what happens - it's the drivers within the framework. I don't think
 ML2 really limits what drivers can do, as long as a virtual network can be
 described as a set of static and possibly dynamic network segments. ML2 is
 intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't sound
 immediately useful, there are several potential use cases to consider. One
 is that it allows new technology to be introduced into an existing cloud
 alongside what previously existed. Migration from one ML2 driver to another
 may be a lot simpler (and/or flexible) than migration from one plugin to
 another. Another is that additional drivers can support special cases, such
 as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

  Russell and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers will be,
 and that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control plane bits
 provided by ML2 which OVN does not need. Still, the fact that we do not use
 a function does not do any harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

See above. I'd like to propose we move OVN to a full plugin instead
 of an ML2 MechanismDriver.

  Kyle


   Salvatore



  Thanks,
  Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

  One example of this that is common now is having the current OVS
 driver responsible for setting up the vswitch and then having a ToR driver
 (e.g. Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

  I suppose with 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Amit Kumar Saha (amisaha)
Hi,

I am new to OpenStack (and am particularly interested in networking). I am 
getting a bit confused by this discussion. Aren’t there already a few 
monolithic plugins (that is what I could understand from reading the Networking 
chapter of the OpenStack Cloud Administrator Guide, Table 7.3, Available 
networking plug-ins)? So how do we have interoperability between those (or do 
we not intend to)?

BTW, it is funny that the acronym ML can also be used for “monolithic” ☺

Regards,
Amit Saha
Cisco, Bangalore



From: Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
Sent: Wednesday, February 25, 2015 6:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

Folks,

A great discussion. I am not an expert at OVN, hence I want to ask a question. The 
answer may make a case that it should probably be an ML2 driver as opposed to 
a monolithic plugin.

Say a customer wants to deploy an OVN based solution and use HW devices from one 
vendor for L2 and L3 (e.g. Arista or Cisco), and wants to use another vendor for 
services (e.g. F5 or A10) - how can that be supported?

If OVN goes in as ML2 driver, I can then run ML2 and Service plugin to achieve 
above solution. For a monolithic plugin, don't I have an issue?

regards..
-Sukhdev


On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando 
sorla...@nicira.com wrote:
I think we're speculating a lot about what would be best for OVN whereas we 
should probably just expose pro and cons of ML2 drivers vs standalone plugin 
(as I said earlier on indeed it does not necessarily imply monolithic *)

I reckon the job of the Neutron community is to provide a full picture to OVN 
developers - so that they could make a call on the integration strategy that 
best suits them.
On the other hand, if we're planning to commit to a model where ML2 is not 
anymore a plugin but the interface with the API layer, then any choice which is 
not a ML2 driver does not make any sense. Personally I'm not sure we ever want 
to do that, at least not in the near/medium term, but I'm one and hardly 
representative of the developer/operator communities.

Salvatore


* In particular, with the advanced services split-out, the term monolithic simply 
does not mean anything anymore.

On 24 February 2015 at 17:48, Robert Kukura 
kuk...@noironetworks.com wrote:
Kyle, What happened to the long-term potential goal of ML2 driver APIs becoming 
neutron's core APIs? Do we really want to encourage new monolithic plugins?

ML2 is not a control plane - it's really just an integration point for control 
planes. Although co-existence of multiple mechanism drivers is possible, and 
sometimes very useful, the single-driver case is fully supported. Even with 
hierarchical bindings, it's not really ML2 that controls what happens - it's the 
drivers within the framework. I don't think ML2 really limits what drivers can 
do, as long as a virtual network can be described as a set of static and 
possibly dynamic network segments. ML2 is intended to impose as few constraints 
on drivers as possible.

My recommendation would be to implement an ML2 mechanism driver for OVN, along 
with any needed new type drivers or extension drivers. I believe this will 
result in a lot less new code to write and maintain.

Also, keep in mind that even if multiple driver co-existence doesn't sound 
immediately useful, there are several potential use cases to consider. One is 
that it allows new technology to be introduced into an existing cloud alongside 
what previously existed. Migration from one ML2 driver to another may be a lot 
simpler (and/or flexible) than migration from one plugin to another. Another is 
that additional drivers can support special cases, such as bare metal, 
appliances, etc..

-Bob

On 2/24/15 11:11 AM, Kyle Mestery wrote:
On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando 
sorla...@nicira.com wrote:
On 24 February 2015 at 01:34, Kyle Mestery 
mest...@mestery.com wrote:
Russell and I have already merged the initial ML2 skeleton driver [1].
The thinking is that we can always revert to a non-ML2 driver if needed.

If nothing else an authoritative decision on a design direction saves us the 
hassle of going through iterations and discussions.
The integration through ML2 is definitely viable. My opinion however is that 
since OVN implements a full control plane, the control plane bits provided by 
ML2 are not necessary, and a plugin which provides only management layer 
capabilities might be the best solution. Note: this does not mean it has to be 
monolithic. We can still do L3 with a service plugin.
However, since the same kind of approach has been adopted for ODL I guess this 
provides some sort of validation.

To be honest, after thinking about this last night, I'm now leaning towards 
doing this as a full plugin. I don't really envision OVN

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Sukhdev Kapur
Folks,

A great discussion. I am not an expert at OVN, hence I want to ask a question.
The answer may make a case that it should probably be an ML2 driver as
opposed to a monolithic plugin.

Say a customer wants to deploy an OVN based solution and use HW devices from
one vendor for L2 and L3 (e.g. Arista or Cisco), and wants to use another
vendor for services (e.g. F5 or A10) - how can that be supported?

If OVN goes in as ML2 driver, I can then run ML2 and Service plugin to
achieve above solution. For a monolithic plugin, don't I have an issue?

regards..
-Sukhdev


On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 I think we're speculating a lot about what would be best for OVN whereas
 we should probably just expose pro and cons of ML2 drivers vs standalone
 plugin (as I said earlier on indeed it does not necessarily imply
 monolithic *)

 I reckon the job of the Neutron community is to provide a full picture to
 OVN developers - so that they could make a call on the integration strategy
 that best suits them.
 On the other hand, if we're planning to commit to a model where ML2 is not
 anymore a plugin but the interface with the API layer, then any choice
 which is not a ML2 driver does not make any sense. Personally I'm not sure
 we ever want to do that, at least not in the near/medium term, but I'm one
 and hardly representative of the developer/operator communities.

 Salvatore


  * In particular, with the advanced services split-out, the term monolithic
  simply does not mean anything anymore.

 On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
 wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, it's not really ML2 that
 controls what happens - it's the drivers within the framework. I don't think
 ML2 really limits what drivers can do, as long as a virtual network can be
 described as a set of static and possibly dynamic network segments. ML2 is
 intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't
 sound immediately useful, there are several potential use cases to
 consider. One is that it allows new technology to be introduced into an
 existing cloud alongside what previously existed. Migration from one ML2
 driver to another may be a lot simpler (and/or flexible) than migration
 from one plugin to another. Another is that additional drivers can support
 special cases, such as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

  Russell and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers will be,
 and that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control plane bits
 provided by ML2 which OVN does not need. Still, the fact that we do not use
 a function does not do any harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

See above. I'd like to propose we move OVN to a full plugin instead
 of an ML2 MechanismDriver.