[openstack-dev] [neutron] [ml2] Any other patch needed for hierarchical port binding?

2015-07-14 Thread loy wolfe
We want to try hierarchical port binding for some ToR VTEPs. However, it
seems that the patch [1] cannot be applied directly onto the Juno
release. Are there any other patches needed when merging this one?

Thanks a lot!

[1] https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding
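For context, here is a rough sketch of how a ToR mechanism driver would
consume the feature once the patches apply. It assumes the PortContext
additions described in the blueprint (segments_to_bind,
allocate_dynamic_segment, continue_binding), which do not exist on an
unpatched Juno tree, and the physnet name is a placeholder:

# Hedged sketch only; assumes the blueprint's PortContext additions, which an
# unpatched Juno tree does not have. The physnet name is made up.
from neutron.plugins.ml2 import driver_api as api


class TorVtepMechanismDriver(api.MechanismDriver):

    def initialize(self):
        pass

    def bind_port(self, context):
        for segment in context.segments_to_bind:
            if segment[api.NETWORK_TYPE] != 'vxlan':
                continue
            # Map the VXLAN segment to a dynamic VLAN segment facing the ToR,
            # program the ToR VTEP (vendor-specific, omitted), then let a
            # second-level driver (e.g. OVS) finish binding on the new segment.
            vlan_segment = context.allocate_dynamic_segment(
                {api.NETWORK_TYPE: 'vlan',
                 api.PHYSICAL_NETWORK: 'physnet-tor'})
            context.continue_binding(segment[api.ID], [vlan_segment])
            return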

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Supporting multiple local subnets for VPN connections

2015-05-20 Thread loy wolfe
Extra subnets are better suited as an attribute of the IKE connections
than of the whole VPN service, because customers usually want fine-grained
control over which local subnets can communicate with which remote subnets.

On Wed, May 20, 2015 at 11:21 PM, Mathieu Rohon mathieu.ro...@gmail.com wrote:
 However after thinking about it more deeply, option A might be suitable if
 the vpn-service becomes more generic, and usable by other vpn objects
 (ipsec-site-connection, bgpvpn-connection).
 It would become an object that the tenant can update to attach the CIDRs it
 wants to export in its VPN.
 To transform the vpn-service object into a generic VPN description, we need
 to remove the mandatory ROUTER attribute, so that we can use it for L2 VPN
 as well.

 Hope we can discuss that on friday morning

 On Wed, May 20, 2015 at 5:12 PM, Paul Michali p...@michali.net wrote:

 Hi Mathieu,

 In Kilo, VPNaaS APIs were no longer marked as experimental. We need to
 understand the implication of changing the API. I'm not sure how much VPNaaS
 is being used by OpenStack users at this time, either.  I'm hoping to seek
 out answers to this at the summit.

 If anyone has suggestions, comments, information on this, please chime in.
 I'll likely make a BP for multiple subnets, when I get back from the summit.

 Lastly, I'm planning on trying to get people interested in VPN to meet on
 Friday morning at the summit to discuss all the VPN topics that have been
 coming up.

 Regards,

 Paul Michali (pc_m)

 On Wed, May 20, 2015 at 7:54 AM Mathieu Rohon mathieu.ro...@gmail.com
 wrote:

 Hi paul,

 this is also something that we would like to introduce for BGP/MPLS VPNs
 [2].

 We chose to allow tenants to attach existing networks (it might evolve
 to subnets) to bgpvpn-connection objects, by updating the bgpvpn-connection
 object, which is the equivalent of the ipsec-site-connection object.

 So I think that Option B is suitable here.

 Concerning backward compatibility, I think VPNaaS is still considered
 experimental, am I wrong? Do you have to provide backward compatibility in
 this case?

 Mathieu

 [2] https://review.openstack.org/#/c/177740/

 On Wed, May 13, 2015 at 8:59 PM, Paul Michali p...@michali.net wrote:

 Hi,

 There has been, over the years, some mention about having VPN IPSec
 connections supporting multiple CIDRs for local (left side) private
 networks, in addition to the current support for multiple peer CIDRs (right
 side).

 I'm raising the question again with these goals:

 1) Determine if the reference IPSec implementations support this
 capability
 2) Determine if there is a community desire to enhance VPN to support
 this capability (for all VPN types)
 3) See what would be the best way to handle this (changes to API and
 model)
 4) Identify any consequences of making this change.

 Note: My assumption here is something that could be used for any type of
 VPN connection - current IPSec, future BGP/MPLS VPN, DM VPN, etc.

 Here is some information that was gathered from people on the VPN team
 so far. Please correct any inaccuracies and comment on the items...

 (1) It looks like OpenSwan and Libreswan will support this capability.
 StrongSwan will support this with IKEv2. For IKEv1, a Cisco Unity plugin
 extension is needed. I'm not sure what that implies [1].

 (2) Do we, as a community, want to enhance VPNaaS to provide this
 capability of N:M subnets for VPN implementations? Putting on my vendor 
 hat,
 I can see cases where customers want to be able to only create one
 connection and reference multiple subnets on each end. Is there a desire to
 do this and bake it into the reference implementation (thus making it
 available for other implementations)?

 (3) Currently, the vpn service API includes the router and subnet ID.
 The IPSec connection command includes the peer CIDR(s). For reference, here
 are two of the APIs:

 usage: neutron vpn-service-create [-h] [-f {html,json,shell,table,value,yaml}]
                                   [-c COLUMN] [--max-width integer]
                                   [--prefix PREFIX]
                                   [--request-format {json,xml}]
                                   [--tenant-id TENANT_ID] [--admin-state-down]
                                   [--name NAME] [--description DESCRIPTION]
                                   ROUTER SUBNET

 usage: neutron ipsec-site-connection-create [-h]
                                             [-f {html,json,shell,table,value,yaml}]
                                             [-c COLUMN] [--max-width integer]
                                             [--prefix PREFIX]
                                             [--request-format {json,xml}]
                                             [--tenant-id TENANT_ID]
                                             [--admin-state-down] [--name NAME]
                                             [--description DESCRIPTION]

[openstack-dev] [all] [cross-project] Any Openstack partitioning movements in this summit?

2015-04-29 Thread loy wolfe
Hi,

Nova cells have been around for several release cycles, with the mission of
scalability. Now there are calls for Neutron cells, and similar partitioning
demands for Cinder/Ceilometer may appear in the future.

So I wonder whether cross-project OpenStack partitioning will come into
view in the near term, with the following example topics:

a) Is there any value to partitioning besides scalability? e.g. fault
domain isolation, heterogeneous integration...
b) Which projects need partitioning besides Nova?
Cinder/Neutron/Ceilometer/Glance/Keystone?
c) Should each project design and deploy its partitioning standalone, or is
some collaboration needed across them? e.g. Can a host belong to one Nova
partition and a different Cinder partition?
d) Concept clarification and guidance on the different partition
granularities, e.g. Cell, Availability Zone, Aggregate...
e) What interface should be used between parent and child partitions:
internal RPC, external REST, or some other protocol?


Best Regards

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-29 Thread loy wolfe
On Thu, Apr 30, 2015 at 2:44 AM, Russell Bryant rbry...@redhat.com wrote:
 On 04/29/2015 01:25 PM, Doug Wiegley wrote:
 My take on the “where does it fit” yardstick:

 Does it stand on its own and add value? Then consider it a standalone
 project, *or* part of neutron if you and neutron agree that it fits.

 Does it *require* neutron to be useful? Then consider having it under
 the neutron umbrella/stadium/tent/yurt.

 ...arena/coliseum/dome...

 That's a nice summary.  Thanks.  :-)

 --
 Russell Bryant



By this definition, nearly all standalone SDN controllers would not be
classified as part of the Neutron tent (including OVN, by its design doc),
because they were not created for Neutron at all. It seems that the only
exception is ofagent.

Most hardware MDs, on the other hand, can be treated as under the Neutron
tent, because they do just one thing: drive hardware on behalf of Neutron.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-29 Thread loy wolfe
On Wed, Apr 29, 2015 at 12:34 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 Yes, ml2 was created since each of the drivers used to be required to do
 everything themselves and it was decided it would be far better for everyone
 to share the common bits. That's what ML2 is about. It's not about
 implementing an SDN.


+1. I totally agree that we should keep the common bits like ML2; however,
this is not the same as what the reference implementation splitting spec
suggested. The intention of the spec is to have no common implementation
bits any more, leaving only the API/DB.


 Thanks,
 Kevin

 
 From: loy wolfe
 Sent: Tuesday, April 28, 2015 6:16:03 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack
 project maintained by core team keep only API/DB in the future?

 On Wed, Apr 29, 2015 at 2:59 AM, Kevin Benton blak...@gmail.com wrote:
 The concern is that having broken drivers out there that claim to work
 with
 an OpenStack project end up making the project look bad. It's similar to a
 first time Linux user experiencing frequent kernel panics because they are
 using hardware with terrible drivers. They aren't going to recognize the
 distinction and will just assume the project is bad.


 I think the focal point is not about device drivers for the real
 backends such as OVS/LB or HW TORs, but about ML2 vs. external SDN
 controllers, which some people also claim to be backends.

 Again, an analogy with Linux: it has a socket layer exposing the API, a
 common TCP/IP stack and common netdev & skbuff code, and each NIC has its
 own device driver (the real backend). While it makes sense to discuss
 whether those backend device drivers should be split out of tree, there
 was never any consideration that the common middle layers should be split
 out to give equal footing to some other external implementations.

 Things are similar with Nova & Cinder: we may have all kinds of virt
 drivers and volume drivers, but only one common scheduling &
 compute/volume manager implementation. For Neutron it is necessary to
 support hundreds of real backends, but does it really benefit customers
 to put ML2 on an equal footing with a bunch of external SDN controllers?

 Best Regards



I would love to see OpenStack upstream acting more like a resource to
 support users and developers

 I'm not sure what you mean here. The purpose of 3rd party CI requirements
 is
 to signal stability to users and to provide feedback to the developers.

 On Tue, Apr 28, 2015 at 4:18 AM, Luke Gorrie l...@tail-f.com wrote:

 On 28 April 2015 at 10:14, Duncan Thomas duncan.tho...@gmail.com wrote:

 If we allow third party CI to fail and wait for vendors to fix their
 stuff, experience has shown that they won't, and there'll be broken or
 barely functional drivers out there, and no easy way for the community
 to
 exert pressure to fix them up.


 Can't the user community exert pressure on the driver developers directly
 by talking to them, or indirectly by not using their drivers? How come
 OpenStack upstream wants to tell the developers what is needed before the
 users get a chance to take a look?

 I would love to see OpenStack upstream acting more like a resource to
 support users and developers (e.g. providing 3rd party CI hooks upon
 request)
 and less like gatekeepers with big sticks to wave at people who don't
 drop
 their own priorities and Follow The Process.





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-28 Thread loy wolfe
On Wed, Apr 29, 2015 at 2:59 AM, Kevin Benton blak...@gmail.com wrote:
 The concern is that having broken drivers out there that claim to work with
 an OpenStack project end up making the project look bad. It's similar to a
 first time Linux user experiencing frequent kernel panics because they are
 using hardware with terrible drivers. They aren't going to recognize the
 distinction and will just assume the project is bad.


I think the focal point is not about device drivers for the real
backends such as OVS/LB or HW TORs, but about ML2 vs. external SDN
controllers, which some people also claim to be backends.

Again, an analogy with Linux: it has a socket layer exposing the API, a
common TCP/IP stack and common netdev & skbuff code, and each NIC has its
own device driver (the real backend). While it makes sense to discuss
whether those backend device drivers should be split out of tree, there
was never any consideration that the common middle layers should be split
out to give equal footing to some other external implementations.

Things are similar with Nova & Cinder: we may have all kinds of virt
drivers and volume drivers, but only one common scheduling &
compute/volume manager implementation. For Neutron it is necessary to
support hundreds of real backends, but does it really benefit
customers to put ML2 on an equal footing with a bunch of external SDN
controllers?

Best Regards



I would love to see OpenStack upstream acting more like a resource to
 support users and developers

 I'm not sure what you mean here. The purpose of 3rd party CI requirements is
 to signal stability to users and to provide feedback to the developers.

 On Tue, Apr 28, 2015 at 4:18 AM, Luke Gorrie l...@tail-f.com wrote:

 On 28 April 2015 at 10:14, Duncan Thomas duncan.tho...@gmail.com wrote:

 If we allow third party CI to fail and wait for vendors to fix their
 stuff, experience has shown that they won't, and there'll be broken or
 barely functional drivers out there, and no easy way for the community to
 exert pressure to fix them up.


 Can't the user community exert pressure on the driver developers directly
 by talking to them, or indirectly by not using their drivers? How come
 OpenStack upstream wants to tell the developers what is needed before the
 users get a chance to take a look?

 I would love to see OpenStack upstream acting more like a resource to
 support users and developers (e.g. providing 3rd party CI hooks upon request)
 and less like gatekeepers with big sticks to wave at people who don't drop
 their own priorities and Follow The Process.




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-28 Thread loy wolfe
On Fri, Apr 24, 2015 at 9:46 PM, Salvatore Orlando sorla...@nicira.com wrote:


 On 24 April 2015 at 15:13, Kyle Mestery mest...@mestery.com wrote:

 On Fri, Apr 24, 2015 at 4:06 AM, loy wolfe loywo...@gmail.com wrote:

 This has already drifted away from the original thread, so I'm starting a
 new one, also with some extra tags because I think it touches some
 cross-project areas.

 Original discussion and references:
 http://lists.openstack.org/pipermail/openstack-dev/2015-April/062384.html

 https://review.openstack.org/#/c/176501/1/specs/liberty/reference-split.rst

 Background summary:
 All in-tree implementations would be split out of OpenStack networking,
 leaving Neutron as a naked API/DB platform, with a list of out-of-tree
 implementation git repos, which are no longer maintained by the core team
 but may be given a nominal big tent under the OpenStack umbrella.


 I'm not sure what led you to this discussion, but it's patently incorrect.
 We're going to split the in-tree reference implementation into a separate
 git repository. I have not said anything about the current core reviewer
 team not being responsible for that. It's natural to evolve to a core
 reviewer team which cares deeply about that, vs. those who care deeply about
 the DB/API layer. This is exactly what happened when we split out the
 advanced services.


 This discussion seems quite similar to the one we had about non-reference
 plugins.
 Following the linux analogy you mention below, Neutron should have been
 deprived of its plugins and drivers. And indeed, regardless of how it seems,
 it hasn't. Any user can still grab drivers as before. They just reside in
 different repos. This is not different, imho, from the concept of
 maintainers that linux has.
 Besides, you make it look as if the management layer (API/DB) is just
 a tiny insignificant piece of software. I disagree quite strongly here, but
 perhaps it's just me seeing in Neutron's mgmt layer something more than what
 it actually is.




 Motivation: a) A smaller core team can focus only on the in-tree API/DB
 definition, freed from concrete controlling-function implementation;
 b) if there were an official implementation inside Neutron, 3rd-party
 external SDN controllers would face competition from it.


 Perhaps point (b) is a bit unclear. Are you stating that having this control
 plane in Neutron gives it a better placement compared with other
 solutions?


The words are just from the reference split spec :)




 I'm not sure whether this is exactly what cloud operators want OpenStack
 to deliver. Do they want an off-the-shelf package, or just a framework,
 leaving them responsible for integrating with other external controlling
 projects? An analogy with Linux: a kernel alone, without any device
 drivers, is of no use at all.


 We're still going to deliver ML2+OVS/LB+[DHCP, L3, metadata] agents for
 Liberty. I'm not sure where your incorrect assumption on what we're going to
 deliver is coming from.


 I would answer with a different analogy - nova. Consider the various agents
 as if they were libvirt. Like libvirt is a component which you use to control
 your hypervisor, the agents control the data plane (OVS and utilities like
 iptables/conntrack/dnsmasq/etc). With this analogy I believe Neutron's
 reference control plane deserves to live on its own, just like nobody
 would ever think that a libvirt implementation within nova is something
 sane.
 However, ML2 is a different beast. It has management and control logic
 inside; we'll need a good surgeon there. Pretty sure our refactoring fans are
 already drooling at the thought of cutting apart another component.




 There have already been many debates about nova-network to Neutron parity.
 If the widely used OVS and LB drivers are out of tree and have to be
 integrated separately by customers, how would they migrate from
 nova-network? A standalone SDN controller has a steep learning curve, and
 a lot of users don't care whether ODL or OpenContrail is the better one to
 integrate; they just want an OpenStack package that works easily with the
 default in-tree implementation and is ready to drive all kinds of open
 source or commercial backends.


 I'm not sure what you mean here. In your opinion, do operators want something
 that works and provides everything out of the box, and something which is
 able to drive open source and commercial backends?
 And besides, I do not see any complication for operators arising from this
 proposal. It's not like they have to maintain another component - indeed
 from an operator perspective l3 agents, dhcp agents, and so on are already
 different components to maintain (and that's one of the pain points they
 feel in using neutron).


At least for me and most of our customers, deploying and maintaining
OpenStack and ODL together with its SDN apps has taken much more
effort than the native Neutron ML2+agents. However, the existing agents do
need to evolve as you mentioned, e.g. into modular combined agents.





 Do you realize that ML2 is plus

Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-27 Thread loy wolfe
On Fri, Apr 24, 2015 at 9:13 PM, Kyle Mestery mest...@mestery.com wrote:
 On Fri, Apr 24, 2015 at 4:06 AM, loy wolfe loywo...@gmail.com wrote:

 It's already away from the original thread, so I start this new one,
 also with some extra tag because I think it touch some corss-project
 area.

 Original discuss and reference:
 http://lists.openstack.org/pipermail/openstack-dev/2015-April/062384.html

 https://review.openstack.org/#/c/176501/1/specs/liberty/reference-split.rst

 Background summary:
 All in-tree implementation would be splitted from Openstack
 networking, leaving Neutron as a naked API/DB platform, with a list
 of out-tree implementation git repos, which are not maintained by core
 team any more, but may be given a nominal big tent under the
 Openstack umbrella.


 I'm not sure what led you to this discussion, but it's patently incorrect.
 We're going to split the in-tree reference implementation into a separate
 git repository. I have not said anything about the current core reviewer
 team not being responsible for that. It's natural to evolve to a core
 reviewer team which cares deeply about that, vs. those who care deeply about
 the DB/API layer. This is exactly what happened when we split out the
 advanced services.

Thanks for the simple explanation, Kyle.

But today Neutron is already composed of many separate sub-teams:
ML2, L3, VPN/LBaaS/FW, etc., with each sub-team responsible for
its own API/DB definition along with its implementation. So
what is the goal of the upcoming split: separate standalone L2/L3 core
API/DB teams, alongside the existing ML2 and L3 plugin/agent
implementation teams? Should we also split the advanced services teams
into API/DB and implementation teams, and do they also need to give equal
footing to external 3rd-party SDN controllers?

It's not important whether a project team is nominally part of
OpenStack, under the big tent/stadium of Neutron as discussed in the
weekly meeting. Positioning is the key: will the existing built-in
ML2+OVS/LB SDN solution only be used for proof of concept in the future,
or will it continue to be maintained as the native deliverable, ready for
production deployment? If a dedicated API/DB team has to coordinate so many
external 3rd-party SDN controllers besides the native built-in SDN, how can
it evolve as features grow rapidly?

Best Regards.



 Motivation: a) A smaller core team can focus only on the in-tree API/DB
 definition, freed from concrete controlling-function implementation;
 b) if there were an official implementation inside Neutron, 3rd-party
 external SDN controllers would face competition from it.

 I'm not sure whether this is exactly what cloud operators want OpenStack
 to deliver. Do they want an off-the-shelf package, or just a framework,
 leaving them responsible for integrating with other external controlling
 projects? An analogy with Linux: a kernel alone, without any device
 drivers, is of no use at all.


 We're still going to deliver ML2+OVS/LB+[DHCP, L3, metadata] agents for
 Liberty. I'm not sure where your incorrect assumption on what we're going to
 deliver is coming from.


 There have already been many debates about nova-network to Neutron parity.
 If the widely used OVS and LB drivers are out of tree and have to be
 integrated separately by customers, how would they migrate from
 nova-network? A standalone SDN controller has a steep learning curve, and
 a lot of users don't care whether ODL or OpenContrail is the better one to
 integrate; they just want an OpenStack package that works easily with the
 default in-tree implementation and is ready to drive all kinds of open
 source or commercial backends.


 Do you realize that ML2 plus the L2 agent is an SDN controller already?


 BTW: +1 to henry and mathieu, that indeed OpenStack is not responsible for
 the switch/router/firewall projects themselves, but it should be
 responsible for scheduling, pooling, and driving those backends, which is
 the same situation as the Nova/Cinder schedulers and compute/volume
 managers. These controlling functions shouldn't be classified as backends
 in Neutron and split out of tree.



 Regards


 On Fri, Apr 24, 2015 at 2:37 AM, Kyle Mestery mest...@mestery.com wrote:
 
 
  On Thu, Apr 23, 2015 at 1:31 PM, Fox, Kevin M kevin@pnnl.gov
  wrote:
 
  Yeah. In the end, it's what git repo the source for a given rpm you
  install
  comes from. Ops will not care that neutron-openvswitch-agent comes from
  repo
  foo.git instead of bar.git.
 
 
 
  That's really the tl;dr of the proposed split.
 
  Thanks,
  Kyle
 
 
  Thanks,
  Kevin
  
  From: Armando M. [arma...@gmail.com]
  Sent: Thursday, April 23, 2015 9:10 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] A big tent home for Neutron
  backend
  code
 
 
  I agree with henry here.
  Armando, If we use your analogy with nova that doesn't build and
  deliver
  KVM, we can say that Neutron doesn't build or deliver OVS. It builds a
  driver and an agent which manage OVS, just

Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-24 Thread loy wolfe
This has already drifted away from the original thread, so I'm starting a
new one, also with some extra tags because I think it touches some
cross-project areas.

Original discussion and references:
http://lists.openstack.org/pipermail/openstack-dev/2015-April/062384.html
https://review.openstack.org/#/c/176501/1/specs/liberty/reference-split.rst

Background summary:
All in-tree implementations would be split out of OpenStack networking,
leaving Neutron as a naked API/DB platform, with a list of out-of-tree
implementation git repos, which are no longer maintained by the core team
but may be given a nominal big tent under the OpenStack umbrella.

Motivation: a) A smaller core team can focus only on the in-tree API/DB
definition, freed from concrete controlling-function implementation;
b) if there were an official implementation inside Neutron, 3rd-party
external SDN controllers would face competition from it.

I'm not sure whether this is exactly what cloud operators want OpenStack
to deliver. Do they want an off-the-shelf package, or just a framework,
leaving them responsible for integrating with other external controlling
projects? An analogy with Linux: a kernel alone, without any device
drivers, is of no use at all.

There have already been many debates about nova-network to Neutron parity.
If the widely used OVS and LB drivers are out of tree and have to be
integrated separately by customers, how would they migrate from
nova-network? A standalone SDN controller has a steep learning curve, and
a lot of users don't care whether ODL or OpenContrail is the better one to
integrate; they just want an OpenStack package that works easily with the
default in-tree implementation and is ready to drive all kinds of open
source or commercial backends.

BTW: +1 to henry and mathieu, that indeed OpenStack is not responsible for
the switch/router/firewall projects themselves, but it should be
responsible for scheduling, pooling, and driving those backends, which is
the same situation as the Nova/Cinder schedulers and compute/volume
managers. These controlling functions shouldn't be classified as backends
in Neutron and split out of tree.

Regards


On Fri, Apr 24, 2015 at 2:37 AM, Kyle Mestery mest...@mestery.com wrote:


 On Thu, Apr 23, 2015 at 1:31 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Yeah. In the end, it's what git repo the source for a given rpm you install
 comes from. Ops will not care that neutron-openvswitch-agent comes from repo
 foo.git instead of bar.git.



 That's really the tl;dr of the proposed split.

 Thanks,
 Kyle


 Thanks,
 Kevin
 
 From: Armando M. [arma...@gmail.com]
 Sent: Thursday, April 23, 2015 9:10 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] A big tent home for Neutron backend
 code


 I agree with henry here.
 Armando, If we use your analogy with nova that doesn't build and deliver
 KVM, we can say that Neutron doesn't build or deliver OVS. It builds a
 driver and an agent which manage OVS, just like nova which provides a driver
 to manage libvirt/KVM.
 Moreover, external SDN controllers are much more complex than Neutron
 with its reference drivers. I feel like forcing the cloud admin to deploy
 and maintain an external SDN controller would be a terrible experience for
 him if he just needs a simple way to manage connectivity between VMs.
 At the end of the day, it might be detrimental for the neutron project.



 I don't think that anyone is saying that cloud admins are going to be
 forced to deploy and maintain an external SDN controller. There are plenty
 of deployment examples where people are just happy with network
 virtualization the way Neutron has been providing for years and we should
 not regress on that. To me it's mostly a matter of responsibilities of who
 develops what, and what that what is :)

 The consumption model is totally a different matter.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-22 Thread loy wolfe
On Thu, Apr 23, 2015 at 3:30 AM, Kyle Mestery mest...@mestery.com wrote:
 On Wed, Apr 22, 2015 at 1:19 PM, Russell Bryant rbry...@redhat.com wrote:

 Hello!

 A couple of things I've been working on lately are project governance
 issues as a TC member and also implementation of a new virtual
 networking alternative with a Neutron driver.  So, naturally I started
 thinking about how the Neutron driver code fits in to OpenStack
 governance.

 Thanks for starting this conversation Russell.


 There are basically two areas with a lot of movement related to this
 issue.

 1) Project governance has moved to a big tent model [1].  The vast
 majority of projects that used to be in Stackforge are being folded in
 to a larger definition of the OpenStack project.  Projects making this
 move meet the following criteria as being one of us:

 http://governance.openstack.org/reference/new-projects-requirements.html

 Official project teams are tracked in this file along with the git repos
 they are responsible for:


 http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

 which is also reflected here:

 http://governance.openstack.org/reference/projects/

 The TC has also been working through defining a system to help
 differentiate efforts by using a set of tags [4].  So far, we have
 tags describing the release handling for a repository, as well as a tag
 for team diversity.  We've also had a lot of discussion about tags to
 help describe maturity, but that is still a work in progress.


 2) In Neutron, some fairly significant good changes are being made to
 help scale the development process.  Advanced services were split out
 into their own repos [2].  Most of the plugin and driver code has also
 been split out into repos [3].

 In terms of project teams, the Neutron team is defined as owning the
 following repos:

   http://governance.openstack.org/reference/projects/neutron.html

  - openstack/neutron
  - openstack/neutron-fwaas
  - openstack/neutron-lbaas
  - openstack/neutron-vpnaas
  - openstack/neutron-specs
  - openstack/python-neutronclient

 The advanced services split is reflected by the fwaas, lbaas, and vpnaas
 repos.

 We also have a large set of repositories related to Neutron backend code:

   http://git.openstack.org/cgit/?q=stackforge%2Fnetworking

  - stackforge/networking-arista
  - stackforge/networking-bagpipe-l2
  - stackforge/networking-bgpvpn
  - stackforge/networking-bigswitch
  - stackforge/networking-brocade
  - stackforge/networking-cisco
  - stackforge/networking-edge-vpn
  - stackforge/networking-hyperv
  - stackforge/networking-ibm
  - stackforge/networking-l2gw
  - stackforge/networking-midonet
  - stackforge/networking-mlnx
  - stackforge/networking-nec
  - stackforge/networking-odl
  - stackforge/networking-ofagent
  - stackforge/networking-ovn
  - stackforge/networking-ovs-dpdk
  - stackforge/networking-plumgrid
  - stackforge/networking-portforwarding
  - stackforge/networking-vsphere

 Note that not all of these are equivalent.  This is just a list of
 stackforge/networking-*.

 In some cases there is a split between code in the Neutron tree and in
 this repo.  In those cases, a shim is in the Neutron tree, but most of
 the code is in the external repo.  It's also possible to have all of the
 code in the external repo.

 There's also a big range of maturity.  Some are quite mature and are
 already used in production.  networking-ovn as an example is quite new
 and being developed in parallel with OVN in the Open vSwitch project.


 So, my question is: Where should these repositories live in terms of
 OpenStack governance and project teams?

 Here are a few paths I think we could take, along with some of my
 initial thoughts on pros/cons.

 a) Adopt these as repositories under the Neutron project team.

 In this case, I would see them operating with their own review teams as
 they do today to avoid imposing additional load on the neutron-core or
 neutron-specs-core teams.  However, by being a part of the Neutron team,
 the backend team would submit to oversight by the Neutron PTL.

 Out of your options proposed, this seems like the most logical one to me. I
 don't really see this imposing a ton of strain on the existing core reviewer
 team, because we'll keep whatever core reviewer teams are already in the
 networking-foo projects.


 There are some other details to work out to ensure expectations are
 clearly set for everyone involved.  If this is the path that makes
 sense, we can work through those as a next step.

 Pros:
  + Seems to be the most natural first choice

 Cons:
  - A lot of changes have been made precisely because Neutron has gotten
 so big.  A single project team/PTL may not be able to effectively
 coordinate all of the additional work.  Maybe the core Neutron project
 would be better off focusing on being a platform, and other project
 teams organize work on backends.

 It's interesting you mention neutron being a platform, because 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
Oh, what you mean is vertical splitting, while I'm talking about horizontal
splitting.

I'm a little confused about why Neutron is designed so differently from
Nova and Cinder. In fact an MD could be very simple, delegating nearly
everything out to the agent. Remember the Cinder volume manager? The real
storage backend can also be deployed outside the server farm as dedicated
hardware, not necessarily a local host-based resource. The agent could act
as a proxy to an outside module, instead of putting a heavy burden on the
central plugin servers, and in addition all backends could interoperate and
co-exist seamlessly (like a single VXLAN network spanning OVS and ToRs in a
hybrid deployment).


On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so what about security groups, and all the other things which need
 coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to a separate monolithic OVN plugin

 ML2 was designed for the co-existence of multiple heterogeneous
 backends, and it works well for all agent-based solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 the various SDN controllers (except the Ryu-lib style), the mechanism
 driver becomes the new monolithic place despite the benefit of code
 reduction: such MDs can't interoperate, either among themselves or with
 the ovs/bridge agent L2pop; each MD has its own exclusive VXLAN
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (where they also interoperate with the native Neutron
 L3/service plugins), while all the other fat MDs (agentless) go with the
 old style of monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking).
 I am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3
 "Available networking plug-ins")? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” :)



 Regards,

 Amit Saha

 Cisco, Bangalore







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

so what about security groups, and all the other things which need
coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to a separate monolithic OVN plugin

 ML2 was designed for the co-existence of multiple heterogeneous
 backends, and it works well for all agent-based solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 the various SDN controllers (except the Ryu-lib style), the mechanism
 driver becomes the new monolithic place despite the benefit of code
 reduction: such MDs can't interoperate, either among themselves or with
 the ovs/bridge agent L2pop; each MD has its own exclusive VXLAN
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (where they also interoperate with the native Neutron
 L3/service plugins), while all the other fat MDs (agentless) go with the
 old style of monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3
 "Available networking plug-ins")? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” :)



 Regards,

 Amit Saha

 Cisco, Bangalore






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 10:50 AM, Kevin Benton blak...@gmail.com wrote:

 You can horizontally split as well (if I understand what axis definitions
 you are using). The Big Switch driver for example will bind ports that
 belong to hypervisors running IVS while leaving the OVS driver to bind
 ports attached to hypervisors running OVS.


That's just what I mean by horizontal, which is limited for some
features. For example, ports belonging to the BSN driver and the OVS driver
can't communicate with each other in the same tunnel network, and neither
do security groups work across both sides.


  I don't fully understand your comments about  the architecture of
 neutron. Most work is delegated to either agents or a backend server.
 Basically every ML2 driver pushes the work via an agent notification or
 an HTTP call of some sort


Here is the key difference: a thin MD such as ovs or bridge never pushes
any work to its agent; it only handles port binding, just like a scheduler
selecting the backend VIF type. The agent notifications are handled by other
common code in ML2, so thin MDs can be seamlessly integrated with each other
horizontally for all features, like tunnel L2pop. On the other hand, a fat
MD pushes all of its work to the backend through HTTP calls, which partly
blocks horizontal interoperation with other backends.

So I'm thinking about this pattern: ML2 with a thin MD -> agent -> HTTP call
to the backend. That should be much easier for horizontal interoperation.
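A conceptual sketch of that last pattern (the class and method names are
hypothetical, not an existing Neutron agent): the agent keeps Neutron's
common RPC/L2pop machinery on the host and simply proxies each port event
to the external backend over HTTP.

# Conceptual sketch only; binding decisions already happened in the thin MD,
# the agent just forwards the resulting port state to an external controller.
import json

import requests

BACKEND_URL = 'http://tor-controller.example.com:8080/ports'  # hypothetical


class ProxyAgentHandler(object):
    """Would be wired into the agent's RPC callbacks (port_update, etc.)."""

    def port_update(self, context, port):
        # Translate the Neutron port into whatever the backend expects
        # and push it out over HTTP.
        payload = {'id': port['id'],
                   'mac_address': port['mac_address'],
                   'network_id': port['network_id'],
                   'host': port.get('binding:host_id')}
        requests.put('%s/%s' % (BACKEND_URL, port['id']),
                     data=json.dumps(payload),
                     headers={'Content-Type': 'application/json'},
                     timeout=5)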


On Feb 25, 2015 6:15 PM, loy wolfe loywo...@gmail.com wrote:

 Oh, what you mean is vertical splitting, while I'm talking about
 horizontal splitting.

 I'm a little confused about why Neutron is designed so differently from
 Nova and Cinder. In fact an MD could be very simple, delegating nearly
 everything out to the agent. Remember the Cinder volume manager? The real
 storage backend can also be deployed outside the server farm as dedicated
 hardware, not necessarily a local host-based resource. The agent could act
 as a proxy to an outside module, instead of putting a heavy burden on the
 central plugin servers, and in addition all backends could interoperate and
 co-exist seamlessly (like a single VXLAN network spanning OVS and ToRs in a
 hybrid deployment).


 On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com
 wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so what about security groups, and all the other things which need
 coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel
 networks across drivers, but that doesn't mean you can't run multiple
 drivers to handle different types or just to provide additional features
 (auditing,  more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to a separate monolithic OVN plugin

 ML2 was designed for the co-existence of multiple heterogeneous
 backends, and it works well for all agent-based solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 the various SDN controllers (except the Ryu-lib style), the mechanism
 driver becomes the new monolithic place despite the benefit of code
 reduction: such MDs can't interoperate, either among themselves or with
 the ovs/bridge agent L2pop; each MD has its own exclusive VXLAN
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (where they also interoperate with the native Neutron
 L3/service plugins), while all the other fat MDs (agentless) go with the
 old style of monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in
 networking). I am getting a bit confused by this discussion. Aren’t there
 already a few monolithic plugins (that is what I could understand from
 reading the Networking chapter of the OpenStack Cloud Administrator Guide,
 Table 7.3 "Available networking plug-ins")? So how do we have
 interoperability between those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” :)



 Regards,

 Amit Saha

 Cisco, Bangalore







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
+1 to a separate monolithic OVN plugin

ML2 was designed for the co-existence of multiple heterogeneous
backends, and it works well for all agent-based solutions: OVS, Linux
Bridge, and even ofagent.

However, when it comes to all kinds of agentless solutions, especially
the various SDN controllers (except the Ryu-lib style), the mechanism
driver becomes the new monolithic place despite the benefit of code
reduction: such MDs can't interoperate, either among themselves or with
the ovs/bridge agent L2pop; each MD has its own exclusive VXLAN
mapping/broadcasting solution.

So my suggestion is to keep the thin MDs (with agents) in the ML2 framework
(where they also interoperate with the native Neutron L3/service plugins),
while all the other fat MDs (agentless) go with the old style of monolithic
plugin, with all L2-L7 features tightly integrated.

On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I am
 getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3
 "Available networking plug-ins")? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” :)



 Regards,

 Amit Saha

 Cisco, Bangalore





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread loy wolfe
Remove everything out of tree, and leave only Neutron API framework as
integration platform, would lower the attractions of the whole
Openstack Project. Without a default good enough reference backend
from community, customers have to depends on packagers to fully test
all backends for them. Can we image nova without kvm, glance without
swift? Cinder is weak because of default lvm backend, if in the future
Ceph became the default it would be much better.

If the goal of this decomposition is eventually moving default
reference driver out, and the in-tree OVS backend is an eyesore, then
it's better to split the Neutron core with base repo and vendor repo.
They only share common base API/DB model, each vendor can extend their
API, DB model freely, using a shim proxy to delegate all the service
logic to their backend controller. They can choose to keep out of
tree, or in tree (vendor repo) with the previous policy that
contribute code reviewing for their code being reviewed by other
vendors.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][opendaylight]OpenDaylight Neutron Plugin design session etherpad

2014-11-03 Thread loy wolfe
I'm a little confused by this etherpad topic: Neutron is not a
controller! Can OpenDaylight become The Controller?

Is this the common consensus of the whole Neutron community, and what
does the word controller mean here?

If Neutron controller nodes are not doing anything related to
controlling, then what about the default built-in implementation of
L2 population in the ML2 plugin, DVR in the L3 plugin, and the
upcoming features like dynamic routing, BGP VPN, and so on?

In my view, ODL is a standalone controller separate from Neutron,
and it can co-exist with Neutron if customers choose to deploy it. But
the built-in implementation in the Neutron plugins is already a sort of
*controller*; there is no logical functional difference in their natural
positioning.

What is the standpoint of the Neutron community: is the built-in
implementation just a reference for PoC (waiting for some future 3rd-party
controller like ODL for commercial deployment), or is it aimed at
large-scale production deployment by itself? Which will be the first-class
citizen?

On Mon, Nov 3, 2014 at 7:15 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 https://etherpad.openstack.org/p/odl-neutron-plugin

 On Mon, Nov 3, 2014 at 4:05 AM, Richard Woo richardwoo2...@gmail.com wrote:
 Hi, what is etherpad link for opendaylight neutron plugin design session?

 http://kilodesignsummit.sched.org/event/5a430f46842e9239ea6c29a69cbe4e84#.VFdhdPTF-0E

 Thanks,

 Richard

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] what is the different between ovs-ofctl and iptalbes? Can we use ovs-ofctl to nat floating ip into fixed ip if we use openvswitch agent?

2014-11-03 Thread loy wolfe
Maybe two reasons: performance lost to flow misses, and feature parity.

L3+ flow tables destroy megaflow aggregation, so if your app has
many concurrent sessions, like a web server, the flow-miss upcalls would
overwhelm vswitchd.

iptables is already there; migrating it to OVS flow tables would need a lot
of extra development, not to mention that some advanced features would be
lost (for example, stateful firewalling). OVS is considering adding some
hook into iptables, but that is still at a very early stage, and even then
it would not be implemented by the OVS datapath flow table but by iptables.

On Tue, Nov 4, 2014 at 1:07 PM, Li Tianqing jaze...@163.com wrote:
 OVS implements OpenFlow, and in OVS it can see L3, so why not use OVS?

 --
 Best
 Li Tianqing

 At 2014-11-04 11:55:46, Damon Wang damon.dev...@gmail.com wrote:

 Hi,

 OVS mainly focuses on L2, while iptables mainly focuses on L3 or higher.

 Damon Wang

 2014-11-04 11:12 GMT+08:00 Li Tianqing jaze...@163.com:






 --
 Best
 Li Tianqing



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-22 Thread loy wolfe
I doubt whether cells by themselves can ease large-scale deployment across
multiple DCs. Although OpenStack is modularized into separate projects,
a top-level, cross-project solution is needed when
considering issues such as scaling, especially coordination between
nova and networking/metering.

Personally, +1 to Steve: in deciding future plans for
scaling, we should explore all the similar proposals, such as Cascading and
Federation. In fact networking and metering face the same challenge as
nova, or an even bigger one with L2pop and DVR. Those two proposals both
bring a cross-project solution beyond nova, each with its own
emphasis from the viewpoint of intra- vs. inter-admin-domain.

On Thu, Oct 23, 2014 at 8:11 AM, Sam Morrison sorri...@gmail.com wrote:

 On 23 Oct 2014, at 5:55 am, Andrew Laski andrew.la...@rackspace.com wrote:

 While I agree that N is a bit interesting, I have seen N=3 in production

  [central API]--[state/region1]--[state/region DC1]
               |                \-[state/region DC2]
               --[state/region2 DC]
               --[state/region3 DC]
               --[state/region4 DC]

 I would be curious to hear any information about how this is working out.  
 Does everything that works for N=2 work when N=3?  Are there fixes that 
 needed to be added to make this work?  Why do it this way rather than bring 
 [state/region DC1] and [state/region DC2] up a level?

 We (NeCTAR) have 3 tiers, our current setup has one parent, 6 children then 3 
 of the children have 2 grandchildren each. All compute nodes are at the 
 lowest level.

 Everything works fine and we haven’t needed to do any modifications.

 We run in a 3 tier system because it matches how our infrastructure is 
 logically laid out, but I don’t see a problem in just having a 2 tier system 
 and getting rid of the middle man.

 Sam


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread loy wolfe
Hi Joe and Cellers,

I've tried to understand the relationship between Cells and Cascading. If
Cells had been designed as below, would they be the same as Cascading?

1) Besides Nova, Neutron/Ceilometer etc. are also hierarchically
structured for scalability.

2) Child-parent interaction is based on the REST OS API, not internal
RPC messages.

By my understanding, the core idea of Cascading is that each resource
building block (like a child cell) is a clearly separated autonomous
system, with the already-defined REST OS API as the northbound integration
interface of each block. Is that right?

So, what's the OAM and business value? Is it easy to add a building
block POD into the running production cloud, while this POD is from a
different Openstack packager and has its own deployment choice:
Openstack version release(J/K/L...), MQ/DB type(mysql/pg,
rabbitmq/zeromq..), backend drivers, Nova/Neutron/Cinder/Ceilometer
controller-node / api-server config options...?
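
For what it is worth, the difference can be made concrete: a cells parent
talks to its children over Nova's internal RPC/message bus, while a
cascading parent would drive each child POD through the ordinary public
API. A minimal sketch (the endpoint, credentials and IDs below are
placeholders; the Client() call uses the Juno-era python-novaclient
signature):

    # Parent-level proxy booting an instance in a child OpenStack (sketch)
    from novaclient import client as nova_client

    # placeholder credentials / IDs for the child POD
    AUTH_URL = 'http://child-pod.example.com:5000/v2.0'
    child = nova_client.Client('2', 'admin', 'secret', 'demo', AUTH_URL)
    child.servers.create(name='vm-1',
                         image='IMAGE-UUID', flavor='1',
                         nics=[{'net-id': 'CHILD-NET-UUID'}])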

Best Regards
Loy


On Wed, Oct 1, 2014 at 3:19 PM, Tom Fifield t...@openstack.org wrote:

 Hi Joe,

 On 01/10/14 09:10, joehuang wrote:
  OpenStack cascading: to integrate multi-site / multi-vendor OpenStack 
  instances into one cloud with OpenStack API exposed.
  Cells: a single OpenStack instance scale up methodology

 Just to let you know - there are actually some users out there that use
 cells to integrate multi-site / multi-vendor OpenStack instances into
 one cloud with OpenStack API exposed., and this is their main reason
 for using cells - not as a scale up methodology.


 Regards,

 Tom



Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-08 Thread loy wolfe
Is TOR-based VXLAN really of common interest to most people, and should a
BP be taken care of now, given that its designed use case is 3rd-party TOR
backend integration?


On Fri, Sep 5, 2014 at 10:54 PM, Robert Kukura kuk...@noironetworks.com
wrote:

 Kyle,

 Please consider an FFE for https://blueprints.launchpad.
 net/neutron/+spec/ml2-hierarchical-port-binding. This was discussed
 extensively at Wednesday's ML2 meeting, where the consensus was that it
 would be valuable to get this into Juno if possible. The patches have had
 core reviews from Armando, Akihiro, and yourself. Updates to the three
 patches addressing the remaining review issues will be posted today, along
 with an update to the spec to bring it in line with the implementation.

 -Bob


 On 9/3/14, 8:17 AM, Kyle Mestery wrote:

 Given how deep the merge queue is (146 currently), we've effectively
 reached feature freeze in Neutron now (likely other projects as well).
 So this morning I'm going to go through and remove BPs from Juno which
 did not make the merge window. I'll also be putting temporary -2s in
 the patches to ensure they don't slip in as well. I'm looking at FFEs
 for the high priority items which are close but didn't quite make it:

 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
 https://blueprints.launchpad.net/neutron/+spec/security-
 group-rules-for-devices-rpc-call-refactor

 Thanks,
 Kyle



Re: [openstack-dev] [nova][NFV] VIF_VHOSTUSER

2014-09-01 Thread loy wolfe
If the Neutron-side MD is just for snabbswitch, then I think there is no
chance for it to be merged into the tree. Maybe we can learn from the
SR-IOV NIC case: although the backend is vendor specific, the MD is
generic and can support Snabb, DPDK OVS, and other userspace vswitches.

As for the reference implementation CI, Snabb can work in a very simple
mode with no need for an agent (just like an agentless SR-IOV VEB NIC).

On Sun, Aug 31, 2014 at 9:36 PM, Itzik Brown itz...@dev.mellanox.co.il
wrote:


 On 8/30/2014 11:22 PM, Ian Wells wrote:

 The problem here is that you've removed the vif_driver option and now
 you're preventing the inclusion of named VIF types into the generic driver,
 which means that rather than adding a package to an installation to add
 support for a VIF driver it's now necessary to change the Nova code (and
 repackage it, or - ew - patch it in place after installation).  I
 understand where you're coming from but unfortunately the two changes
 together make things very awkward.  Granted that vif_driver needed to go
 away - it was the wrong level of code and the actual value was coming from
 the wrong place anyway (nova config and not Neutron) - but it's been
 removed without a suitable substitute.

 It's a little late for a feature for Juno, but I think we need to write
 something discovers VIF types installed on the system.  That way you can
 add a new VIF type to Nova by deploying a package (and perhaps naming it in
 config as an available selection to offer to Neutron) *without* changing
 the Nova tree itself.


 In the meantime, I recommend you consult with the Neutron cores and see
 if you can make an exception for the VHOSTUSER driver for the current
 timescale.
 --
 Ian.

  I Agree with Ian.
 My understanding from a conversation a month ago was that there would be
 an alternative to the deprecated config option.
 As far as I understand now there is no such alternative in Juno and in
 case that one has an out of the tree VIF Driver he'll be left out with a
 broken solution.
 What do you say about an option of reverting the change?
 Anyway It might be a good idea to discuss propositions to address this
 issue towards Kilo summit.

 BR,
 Itzik




Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread loy wolfe
On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

 Ports are bound in order of configured drivers so as long as the
 OpenVswitch driver is put first in the list, it will bind the ports it can
 and then ODL would bind the leftovers. [1][2] The only missing component is
 that ODL doesn't look like it uses l2pop so establishing tunnels between
 the OVS agents and the ODL-managed vswitches would be an issue that would
 have to be handled via another process.

 Regardless, my original point is that the driver keeps the neutron
 semantics and DB in tact. In my opinion, the lack of compatibility with
 l2pop isn't an issue with the driver, but more of an issue with how l2pop
 was designed. It's very tightly coupled to having agents managed by Neutron
 via RPC, which shouldn't be necessary when it's primary purpose is to
 establish endpoints for overlay tunnels.


So why not agent-based? Neutron shouldn't be treated as just resource
storage; built-in backends naturally need things like l2pop and DVR for
distributed, dynamic topology control, so we can't call something that is
part of the design "tightly coupled".

On the contrary, 3rd-party backends should adapt themselves to be
integrated into Neutron as thinly as they can, focusing on backend device
control rather than re-implementing core service logic that duplicates
Neutron's. BTW, ofagent is a good example of this style.




 1.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
 2.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


 On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:

 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).


 I understand the point you are trying to make, but this blanket
 statement about the data model of drivers/plugins with REST backends is
 wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
 is still stored in Neutron and all of the semantics of the API are
 maintained. The l2pop driver is to deal with decentralized overlays, so I'm
 not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network,  then can we bind some ports to built-in
 ovs driver, and other ports to ODL driver? linux bridge agnet, ovs agent,
 ofagent can co-exist in the same vxlan network, under the common l2pop
 mechanism. By that scenery, I'm not sure whether ODL can just add to them
 in a heterogeneous multi-backend architecture , or work exclusively and
 have to take over all the functionality.



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from other thread discussing about incubator:

 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).

 As for the Snabb or DPDKOVS (they also plan to support official qemu
 vhost-user), or some other similar contributions, if one or two of them win
 in the war of this high performance userspace vswitch, and receive large
 common interest, then it may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
 like a vendor plugin but is actually completely open source. The
 development is driven by end-user organisations who want to make the
 standard upstream Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community.
 In this cycle we have found kindred spirits in the NFV subteam., but we 
 did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g

Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-27 Thread loy wolfe
On Wed, Aug 27, 2014 at 2:44 PM, Kevin Benton blak...@gmail.com wrote:

 Incubator doesn't mean being kicked out of tree, it just mean that the
 API and resource model needs to be baked for fast iteration, and can't be
 put in tree temporarily.

 That was exactly my point about developing a major feature like DVR. Even
 with a limited API change (the new distributed flag), it has an impact on
 the the router/agent DB resource model and currently isn't compatible
 with VLAN based ML2 deployments. It's not exactly a hidden optimization
 like an improvement to some RPC handling code.


The flag is only for admin use; tenants can't see it, and the default
policy for routers is set up by the config file.

DVR COULD be made compatible with VLAN, but there were some bugs several
months ago where the DVR MAC was not written successfully onto egress
packets; I'm not sure whether that is fixed in the merged code.


 A huge piece of the DVR development had to happen in Neutron forks and 40+
 revision chains of Gerrit patches. It was very difficult to follow without
 being heavily involved with the L3 team. This would have been a
 great candidate to develop in the incubator if it existed at the time. It
 would have been easier to try as it was developed and to explore the entire
 codebase. Also, more people could have been contributing bug fixes and
 improvements since an entire section of code wouldn't be 'owned' by one
 person like it is with the author of a Gerrit review.

 For DVR, as it has no influence on tenant facing API resource model, it
 works as the built-in backend, and this feature has accepted wide common
 interests,

 As was pointed out before, common interest has nothing to do with
 incubation. Incubation is to rapidly iterate on a new feature for Neutron.
 It shouldn't be restricted to API changes, it should be used for any major
 new features that are possible to develop outside of the Neutron core. If
 we are going to have this new incubator tool, we should use it to the
 fullest extent possible.



 On Tue, Aug 26, 2014 at 6:19 PM, loy wolfe loywo...@gmail.com wrote:

 Incubator doesn't mean being kicked out of tree, it just mean that the
 API and resource model needs to be baked for fast iteration, and can't be
 put in tree temporarily. As kyle has said, incubator is not talking about
 moving 3rd drivers out of tree, which is in another thread.

 For DVR, as it has no influence on tenant facing API resource model, it
 works as the built-in backend, and this feature has accepted wide common
 interests, it's just the internal performance optimization tightly coupled
 with existing code, so it should be developed in tree.


 On Wed, Aug 27, 2014 at 8:08 AM, Kevin Benton blak...@gmail.com wrote:

 From what I understand, the intended projects for the incubator can't
 operate without neutron because they are just extensions/plugins/drivers.

 For example, if the DVR modifications to the reference reference L3
 plugin weren't already being developed in the tree, DVR could have been
 developed in the incubator and then merged into Neutron once the bugs were
 ironed out so a huge string of Gerrit patches didn't need to be tracked. If
 that had happened, would it make sense to keep the L3 plugin as a
 completely separate project or merge it? I understand this is the approach
 the load balancer folks took by making Octavia a separate project, but I
 think it can still operate on its own, where the reference L3 plugin (and
 many of the other incubator projects) are just classes that expect to be
 able to make core Neutron calls.





 --
 Kevin Benton



Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-26 Thread loy wolfe
Incubator doesn't mean being kicked out of the tree; it just means that
the API and resource model need to be baked through fast iteration, and
can't be put in the tree for the time being. As Kyle has said, the
incubator is not about moving 3rd-party drivers out of tree, which is
discussed in another thread.

As for DVR, since it has no influence on the tenant-facing API resource
model, it works as the built-in backend, and the feature has attracted
wide common interest; it's just an internal performance optimization
tightly coupled with existing code, so it should be developed in tree.


On Wed, Aug 27, 2014 at 8:08 AM, Kevin Benton blak...@gmail.com wrote:

 From what I understand, the intended projects for the incubator can't
 operate without neutron because they are just extensions/plugins/drivers.

 For example, if the DVR modifications to the reference reference L3 plugin
 weren't already being developed in the tree, DVR could have been developed
 in the incubator and then merged into Neutron once the bugs were ironed out
 so a huge string of Gerrit patches didn't need to be tracked. If that had
 happened, would it make sense to keep the L3 plugin as a completely
 separate project or merge it? I understand this is the approach the load
 balancer folks took by making Octavia a separate project, but I think it
 can still operate on its own, where the reference L3 plugin (and many of
 the other incubator projects) are just classes that expect to be able to
 make core Neutron calls.



Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-26 Thread loy wolfe
On Sun, Aug 24, 2014 at 5:09 PM, Luke Gorrie l...@tail-f.com wrote:

 On 21 August 2014 12:12, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Let the ones that are primarily interested in
  good quality of that code (vendors) to drive development. And if some
 plugins become garbage, it's bad news for specific vendors; if neutron
 screws because of lack of concentration on core features and open
 source plugins, everyone is doomed.


 Completely agree with this sentiment. Is there a crisp distinction between
 a vendor plugin and an open source plugin though?


This topic is interesting: should all open-source backend drivers be put
into the tree?

But as Kyle has mentioned earlier, the incubator is not the place to
discuss in-tree vs. out-of-tree for 3rd-party vs. built-in drivers; it is
the place to bake newly introduced APIs and resource models through fast
iteration, so I'll forward this topic to another thread.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks like
 a vendor plugin but is actually completely open source. The development is
 driven by end-user organisations who want to make the standard upstream
 Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam., but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if there
 is a suitable process available for us to use in Kilo e.g. incubation.

 Cheers,
 -Luke





Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-26 Thread loy wolfe
Forwarded from other thread discussing about incubator:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction between
 a vendor plugin and an open source plugin though?


I think that being open source is not the only factor; it's about
built-in vs. 3rd-party backends. Built-in must be open source, but open
source is not necessarily built-in. In my view, the current OVS and
linuxbridge backends are built-in, but shim RESTful proxies for all kinds
of SDN controllers should be 3rd-party, because they keep the whole
virtual networking data model and service logic in their own places, using
the Neutron API just as a northbound shell (today they can't even co-work
with the built-in l2pop driver for the vxlan/gre network types).

As for Snabb or DPDK OVS (they also plan to support the official qemu
vhost-user), or other similar contributions: if one or two of them win the
war of the high-performance userspace vswitch and receive broad common
interest, then they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks like
 a vendor plugin but is actually completely open source. The development is
 driven by end-user organisations who want to make the standard upstream
 Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam., but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if there
 is a suitable process available for us to use in Kilo e.g. incubation.

 Cheers,
 -Luke



Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-26 Thread loy wolfe
On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:

 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).


 I understand the point you are trying to make, but this blanket statement
 about the data model of drivers/plugins with REST backends is wrong. Look
 at the ODL mechanism driver for a counter-example.[1] The data is still
 stored in Neutron and all of the semantics of the API are maintained. The
 l2pop driver is to deal with decentralized overlays, so I'm not sure how
 its interoperability with the ODL driver is relevant.


If we create a vxlan network, can we then bind some ports to the built-in
OVS driver and other ports to the ODL driver? The linuxbridge agent, OVS
agent and ofagent can co-exist in the same vxlan network under the common
l2pop mechanism. In that scenario, I'm not sure whether ODL can simply
join them in a heterogeneous multi-backend architecture, or whether it
works exclusively and has to take over all the functionality.
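
For reference, the ordered binding Kevin describes is controlled purely by
the driver order in ml2_conf.ini; something like the following
(illustrative values only, using the driver aliases registered via entry
points) would let the OVS driver try to bind first and leave the remaining
ports to ODL:

    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,opendaylight

    [ml2_type_vxlan]
    vni_ranges = 1001:2000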



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from other thread discussing about incubator:
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that opensource is not the only factor, it's about built-in vs.
 3rd backend. Built-in must be opensource, but opensource is not necessarily
 built-in. By my thought, current OVS and linuxbridge are built-in, but shim
 RESTful proxy for all kinds of sdn controller should be 3rd, for they keep
 all virtual networking data model and service logic in their own places,
 using Neutron API just as the NB shell (they can't even co-work with
 built-in l2pop driver for vxlan/gre network type today).

 As for the Snabb or DPDKOVS (they also plan to support official qemu
 vhost-user), or some other similar contributions, if one or two of them win
 in the war of this high performance userspace vswitch, and receive large
 common interest, then it may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
 like a vendor plugin but is actually completely open source. The
 development is driven by end-user organisations who want to make the
 standard upstream Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam., but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g. incubation.

 Cheers,
 -Luke





 --
 Kevin Benton



Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-21 Thread loy wolfe
On Thu, Aug 21, 2014 at 12:28 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 Some comments inline.

 Salvatore

 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Hi all,

 I've read the proposal for incubator as described at [1], and I have
 several comments/concerns/suggestions to this.

 Overall, the idea of giving some space for experimentation that does
 not alienate parts of community from Neutron is good. In that way, we
 may relax review rules and quicken turnaround for preview features
 without loosing control on those features too much.

 Though the way it's to be implemented leaves several concerns, as follows:

 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with a
 single tarball instead of two. Meaning, it would be better to keep the
 code in the same tree.

 I know that we're afraid of shipping the code for which some users may
 expect the usual level of support and stability and compatibility.
 This can be solved by making it explicit that the incubated code is
 unsupported and used on your user's risk. 1) The experimental code
 wouldn't probably be installed unless explicitly requested, and 2) it
 would be put in a separate namespace (like 'preview', 'experimental',
 or 'staging', as the call it in Linux kernel world [2]).

 This would facilitate keeping commit history instead of loosing it
 during graduation.

 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project. Well,
 there are lots of EXPERIMENTAL features in Linux kernel that we
 actively use (for example, btrfs is still considered experimental by
 Linux kernel devs, while being exposed as a supported option to RHEL7
 users), so I don't see how that naming concern is significant.


 I think this is the whole point of the discussion around the incubator and
 the reason for which, to the best of my knowledge, no proposal has been
 accepted yet.


 2. If those 'extras' are really moved into a separate repository and
 tarballs, this will raise questions on whether packagers even want to
 cope with it before graduation. When it comes to supporting another
 build manifest for a piece of code of unknown quality, this is not the
 same as just cutting part of the code into a separate
 experimental/labs package. So unless I'm explicitly asked to package
 the incubator, I wouldn't probably touch it myself. This is just too
 much effort (btw the same applies to moving plugins out of the tree -
 once it's done, distros will probably need to reconsider which plugins
 they really want to package; at the moment, those plugins do not
 require lots of time to ship them, but having ~20 separate build
 manifests for each of them is just too hard to handle without clear
 incentive).


 One reason instead for moving plugins out of the main tree is allowing
 their maintainers to have full control over them.
 If there was a way with gerrit or similars to give somebody rights to
 merge code only on a subtree I probably would not even consider the option
 of moving plugin and drivers away. From my perspective it's not that I
 don't want them in the main tree, it's that I don't think it's fair for
 core team reviewers to take responsibility of approving code that they
 can't fully tests (3rd partt CI helps, but is still far from having a
 decent level of coverage).


It's also unfair that core team reviewers are forced to spend time on
3rd-party plugins and drivers under the existing process. There are so
many 3rd-party networking backend technologies, from hardware to
controllers; anyone can submit plugins and drivers to the tree, and by the
principle of neutrality we can't accept some and refuse others' review
requests. Reviewers' time slots then fill up with this 3rd-party backend
work, leaving less time for the most important and urgent thing: improving
the Neutron core architecture to the same level of maturity as Nova as
soon as possible.





 3. The fact that neutron-incubator is not going to maintain any stable
 branches for security fixes and major failures concerns me too. In
 downstream, we don't generally ship the latest and greatest from PyPI.
 Meaning, we'll need to maintain our own downstream stable branches for
 major fixes. [BTW we already do that for python clients.]


 This is a valid point. We need to find an appropriate trade off. My
 thinking was that incubated projects could be treated just like client
 libraries from a branch perspective.



 4. Another unclear part of the proposal is that notion of keeping
 Horizon and client changes required for incubator features in
 neutron-incubator. AFAIK the repo will be governed by Neutron Core
 team, and I doubt the team is ready to review Horizon changes (?). I
 think I don't understand how we're going to handle that. Can we 

Re: [openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

2014-08-20 Thread loy wolfe
On Wed, Aug 20, 2014 at 7:03 PM, Salvatore Orlando sorla...@nicira.com
wrote:

 As the original thread had a completely different subject, I'm starting a
 new one here.

 More specifically the aim of this thread is about:
 1) Define when a service is best implemented with a service plugin or with
 a ML2 driver
 2) Discuss how bindings between a core resource and the one provided by
 the service plugin should be exposed at the management plane, implemented
 at the control plane, and if necessary also at the data plane.

 Some more comments inline.

 Salvatore


 When a port is created, and it has Qos enforcement thanks to the service
 plugin,
 let's assume that a ML2 Qos Mech Driver can fetch Qos info and send
 them back to the L2 agent.
 We would probably need a Qos Agent which communicates with the plugin
 through a dedicated topic.


 A distinct agent has pro and cons. I think however that we should try and
 limit the number of agents on the hosts to a minimum. And this minimum in
 my opinion should be 1! There is already a proposal around a modular agent
 which should be able of loading modules for handling distinct services. I
 think that's the best way forward.



+1
A consolidated modular agent can greatly reduce RPC communication with the
plugin, as well as redundant code. If we can't merge everything into a
single Neutron agent now, we can at least merge into two agents: a modular
L2 agent and a modular L3+ agent.
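
As a rough illustration of how such a consolidated agent could stay
modular, it could load per-service handlers with stevedore, the same
plugin-loading library ML2 already uses for its drivers (the namespace and
the handler interface below are hypothetical, not an existing Neutron
API):

    # Sketch: one agent process loading pluggable service handlers
    from stevedore import named

    def load_agent_extensions(names):
        mgr = named.NamedExtensionManager(
            namespace='neutron.agent.extensions',   # hypothetical namespace
            names=names,
            invoke_on_load=True)
        return [ext.obj for ext in mgr]

    for handler in load_agent_extensions(['l2', 'qos']):
        handler.initialize()   # hypothetical interface; all handlers could
                               # share one RPC connection to the plugin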





Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-14 Thread loy wolfe
On Thu, Aug 14, 2014 at 4:22 PM, Mathieu Rohon mathieu.ro...@gmail.com
wrote:

 Hi,

 I would like to add that it would be harder for the community to help
 maintaining drivers.
 such a work [1] wouldn't have occured with an out of tree ODL driver.


+1.
It's better to move all MDs for non-built-in backends out of tree;
maintaining these drivers shouldn't be the responsibility of the
community. Not only MDs, but also plugins and agents should follow this
rule.
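
As Cedric notes further down in this thread, keeping a driver out of tree
is mostly a packaging question: ML2 discovers mechanism drivers through
stevedore entry points, so an out-of-tree package only needs something
like the following in its setup.cfg (the package, module and class names
here are made up for illustration):

    [entry_points]
    neutron.ml2.mechanism_drivers =
        my_backend = my_backend_pkg.mech_driver:MyBackendMechanismDriver

The operator then simply lists 'my_backend' in the mechanism_drivers
option of ml2_conf.ini.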



 [1] https://review.openstack.org/#/c/96459/

 On Wed, Aug 13, 2014 at 1:09 PM, Robert Kukura kuk...@noironetworks.com
 wrote:
  One thing to keep in mind is that the ML2 driver API does sometimes
 change,
  requiring updates to drivers. Drivers that are in-tree get updated along
  with the driver API change. Drivers that are out-of-tree must be updated
 by
  the owner.
 
  -Bob
 
 
  On 8/13/14, 6:59 AM, ZZelle wrote:
 
  Hi,
 
 
  The important thing to understand is how to integrate with neutron
 through
  stevedore/entrypoints:
 
 
 https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
 
 
  Cedric
 
 
  On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker d...@dtucker.co.uk
 wrote:
 
  I've been working on this for OpenDaylight
  https://github.com/dave-tucker/odl-neutron-drivers
 
  This seems to work for me (tested Devstack w/ML2) but YMMV.
 
  -- Dave
 


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-14 Thread loy wolfe
On Thu, Aug 14, 2014 at 9:07 PM, Kyle Mestery mest...@mestery.com wrote:

 I also feel like the drivers/plugins are currently BEYOND a tipping
 point, and are in fact dragging down velocity of the core project in
 many ways. I'm working on a proposal for Kilo where we move all
 drivers/plugins out of the main Neutron tree and into a separate git


Not all drivers/plugins, but most of those that are not built-in. For
example, ML2 with the ovs/linuxbridge/sriov MDs and the l2pop MD should be
kept in tree as the default built-in backend, but all vendor-specific MDs
and shim REST proxies, such as the various SDN controller MDs, should be
moved out.


 repository under the networking program. We have way too many drivers,
 requiring way too man review cycles, for this to be a sustainable
 model going forward. Since the main reason plugin/driver authors want
 their code upstream is to be a part of the simultaneous release, and
 thus be packaged by distributions, having a separate repository for
 these will satisfy this requirement. I'm still working through the
 details around reviews of this repository, etc.

 Also, I feel as if the level of passion on the mailing list has died
 down a bit, so I thought I'd send something out to try and liven
 things up a bit. It's been somewhat non-emotional here for a day or
 so. :)

 Thanks,
 Kyle

 On Thu, Aug 14, 2014 at 5:09 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  I think there will soon be a discussion regarding what the appropriate
  location for plugin and drivers should be.
  My personal feeling is that Neutron has simply reached the tipping point
  where the high number of drivers and plugins is causing unnecessary load
 for
  the core team and frustration for the community.
 
  There I would totally support Luke's initiative about maintaining an
  out-of-tree ML2 driver. On the other hand, a plugin/driver diaspora
 might
  also have negative consequences such as frequent breakages such as those
 Bob
  was mentioning or confusion for users which might need to end up fetching
  drivers from disparate sources.
 
  As mentioned during the last Neutron IRC meeting this is another
 process
  aspect which will be discussed soon, with the aim of defining a plan for:
  - drastically reduce the number of plugins and drivers which must be
  maintained in the main source tree
  - enhance control of plugin/driver maintainers over their own code
  - preserve the ability of doing CI checks on gerrit as we do today
  - raise the CI bar (maybe finally set the smoketest as a minimum
  requirement?)
 
  Regards,
  Salvatore
 
 
 
  On 14 August 2014 11:47, loy wolfe loywo...@gmail.com wrote:
 
 
 
 
  On Thu, Aug 14, 2014 at 4:22 PM, Mathieu Rohon mathieu.ro...@gmail.com
 
  wrote:
 
  Hi,
 
  I would like to add that it would be harder for the community to help
  maintaining drivers.
  such a work [1] wouldn't have occured with an out of tree ODL driver.
 
 
  +1.
  It's better to move all MD for none built-in backend out of tree,
  maintaining these drivers shouldn't be the responsibility of community.
 Not
  only MD, but also plugin, agent should all obey this rule
 
 
 
  [1] https://review.openstack.org/#/c/96459/
 
  On Wed, Aug 13, 2014 at 1:09 PM, Robert Kukura 
 kuk...@noironetworks.com
  wrote:
   One thing to keep in mind is that the ML2 driver API does sometimes
   change,
   requiring updates to drivers. Drivers that are in-tree get updated
   along
   with the driver API change. Drivers that are out-of-tree must be
   updated by
   the owner.
  
   -Bob
  
  
   On 8/13/14, 6:59 AM, ZZelle wrote:
  
   Hi,
  
  
   The important thing to understand is how to integrate with neutron
   through
   stevedore/entrypoints:
  
  
  
 https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
  
  
   Cedric
  
  
   On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker d...@dtucker.co.uk
   wrote:
  
   I've been working on this for OpenDaylight
   https://github.com/dave-tucker/odl-neutron-drivers
  
   This seems to work for me (tested Devstack w/ML2) but YMMV.
  
   -- Dave
  

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-12 Thread loy wolfe
Hi Paul,

Below are some other useful GBP reference pages:
https://wiki.opendaylight.org/view/Project_Proposals:Group_Based_Policy_Plugin
http://www.cisco.com/en/US/prod/collateral/netmgtsw/ps13004/ps13460/white-paper-c11-729906_ns1261_Networking_Solutions_White_Paper.html

I think the root cause of this long argument is that the GBP core model
was not designed natively for Neutron, and it was introduced into Neutron
rather radically, without careful tailoring and adaptation. Maybe the GBP
team doesn't want to do that either; their intention is to maintain a
unified model across all kinds of platforms, including Neutron,
OpenDaylight, ACI/OpFlex, etc.

However, redundancy and duplication exist between EP/EPG/BD/RD and
Port/Network/Subnet. So a mapping is used between these objects, and I
think this is why so many voices request moving GBP out of and on top of
Neutron.

Will GBP simply be an *addition*? It absolutely COULD be, but objectively
speaking, its core model also allows it to take over the Neutron core
resources (see the wiki above). The GBP mapping spec suggested a nova
--nic extension to handle EP/EPG resources directly, so that all the
original Neutron core resources could be shadowed away from the user
interface: GBP becomes the new OpenStack network API :-) No one is saying
deprecate the Neutron core here and now, but shall we leave the Neutron
core as merely *traditional/legacy*?

Personally I prefer not to throw NW-Policy out of Neutron, but with the
prerequisite that its core model be reviewed and tailored. A new
lightweight model carefully designed natively for Neutron is needed,
rather than a whole bunch of monolithic core resources copied directly
from another existing system.

Here is the very basic suggestion: because the core value of GBP is the
policy template with contracts, throw away the EP/EPG/L2P/L3P model
instead of just renaming it again and again. APPLY the policy template to
existing Neutron core resources, rather than reinventing similar concepts
in GBP and then doing the mapping.


On Mon, Aug 11, 2014 at 9:12 PM, CARVER, PAUL pc2...@att.com wrote:

loy wolfe [mailto:loywo...@gmail.com] wrote:

  Then since Network/Subnet/Port will never be treated just as LEGACY
  COMPATIBLE role, there is no need to extend Nova-Neutron interface to
  follow the GBP resource. Anyway, one of optional service plugins inside
  Neutron shouldn't has any impact on Nova side.

 This gets to the root of why I was getting confused about Jay and others
 having Nova related concerns. I was/am assuming that GBP is simply an
 *additional* mechanism for manipulating Neutron, not a deprecation of any
 part of the existing Neutron API. I think Jay's concern and the reason
 why he keeps mentioning Nova as the biggest and most important consumer
 of Neutron's API stems from an assumption that Nova would need to change
 to use the GBP API.







Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-11 Thread loy wolfe
Hi Sumit,

First I want to say that I'm not opposed to GBP itself, but I have a lot
of confusion about its core resource model and how it will integrate with
the Neutron core.

Do you mean that whatever GBP backend is configured in any future Neutron
deployment, as long as it is in tree, an ML2 core plugin shall always be
there to expose the Neutron core resources: Network/Subnet/Port?

Then, since Network/Subnet/Port will never be relegated to a merely LEGACY
COMPATIBLE role, there is no need to extend the Nova-Neutron interface to
follow the GBP resources. In any case, an optional service plugin inside
Neutron shouldn't have any impact on the Nova side.

If we agree on this point, the core model of GBP should be reviewed, not
just the naming convention of whether it should be called EP or policy
target, while leaving a few words here to emphasize that GBP is only
complementary. In fact, EP/EPG/BD/RD have been designed to be ABLE TO
REPLACE the Neutron core resources, with mapping as the first step to keep
compatibility.

In fact, if the Neutron core resources are never to be swapped out, the
GBP core objects could be greatly simplified, because mapping already
means redundancy :-) Only the policy-group is meaningful, behind which is
a very important policy concept: consumer/producer contracts. After a PG
is defined, it should be directly applied to existing Neutron core
resources, rather than creating the similar EP/L2P/L3P concepts and then
mapping to them. Mapping is redundant, and I can understand its necessity
only if those Neutron core resources are someday planned to be swapped
out.

Simple conclusion: if GBP is just an optional complement, then after
defining the policy template, directly APPLY it, rather than creating
similar redundant resources and then MAPPING them to existing Neutron
core resources.


On Sat, Aug 9, 2014 at 3:35 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/08/2014 12:29 PM, Sumit Naiksatam wrote:

 Hi Jay, To extend Ivar's response here, the core resources and core
 plugin configuration does not change with the addition of these
 extensions. The mechanism to implement the GBP extensions is via a
 service plugin. So even in a deployment where a GBP service plugin is
 deployed with a driver which interfaces with a backend that perhaps
 directly understands some of the GBP constructs, that system would
 still need to have a core plugin configured that honors Neutron's core
 resources. Hence my earlier comment that GBP extensions are
 complementary to the existing core resources (in much the same way as
 the existing extensions in Neutron).


 OK, thanks Sumit. That clearly explains things for me.

 Best,
 -jay





Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread loy wolfe
+1 mark


On Tue, Aug 5, 2014 at 4:27 AM, Mark McClain mmccl...@yahoo-inc.com wrote:

  All-

 tl;dr

 * Group Based Policy API is the kind of experimentation we be should
 attempting.
 * Experiments should be able to fail fast.
 * The master branch does not fail fast.
 * StackForge is the proper home to conduct this experiment.


 Why this email?
 ---
 Our community has been discussing and working on Group Based Policy (GBP)
 for many months.  I think the discussion has reached a point where we need
 to openly discuss a few issues before moving forward.  I recognize that
 this discussion could create frustration for those who have invested
 significant time and energy, but the reality is we need ensure we are
 making decisions that benefit all members of our community (users,
 operators, developers and vendors).

 Experimentation
 
 I like that as a community we are exploring alternate APIs.  The process
 of exploring via real user experimentation can produce valuable results.  A
 good experiment should be designed to fail fast to enable further trials
 via rapid iteration.

 Merging large changes into the master branch is the exact opposite of
 failing fast.

 The master branch deliberately favors small iterative changes over time.
  Releasing a new version of the proposed API every six months limits our
 ability to learn and make adjustments.

 In the past, we’ve released LBaaS, FWaaS, and VPNaaS as experimental APIs.
  The results have been very mixed as operators either shy away from
 testing/offering the API or embrace the API with the expectation that the
 community will provide full API support and migration.  In both cases, the
 experiment fails because we either could not get the data we need or are
 unable to make significant changes without accepting a non-trivial amount
 of technical debt via migrations or draft API support.

 Next Steps
 --
 Previously, the GPB subteam used a Github account to host the development,
 but the workflows and tooling do not align with OpenStack's development
 model. I’d like to see us create a group based policy project in
 StackForge.  StackForge will host the code and enable us to follow the same
 open review and QA processes we use in the main project while we are
 developing and testing the API. The infrastructure there will benefit us as
 we will have a separate review velocity and can frequently publish
 libraries to PyPI.  From a technical perspective, the 13 new entities in
 GPB [1] do not require any changes to internal Neutron data structures.
  The docs[2] also suggest that an external plugin or service would work to
 make it easier to speed development.

 End State
 -
 APIs require time to fully bake and right now it is too early to know the
 final outcome.  Using StackForge will allow the team to retain all of its
 options including: merging the code into Neutron, adopting the repository
 as sub-project of the Network Program, leaving the project in StackForge
 project or learning that users want something completely different.  I
 would expect that we'll revisit the status of the repo during the L or M
 cycles since the Kilo development cycle does not leave enough time to
 experiment and iterate.


 mark

 [1]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/group-based-policy-abstraction.rst#n370
 [2]
 https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/edit#slide=id.g12c5a79d7_4078
 [3]



[openstack-dev] Local message bus to connect all neutron agents in a host

2014-07-30 Thread loy wolfe
OpenStack is designed to be decoupled between all modules, both Nova and
all the Neutron plugins: ML2, L3, advanced services... However, their
agents may need some sort of interaction. Here are two examples:

1) DVR. L2 population already pushes all port contexts for a subnet to the
compute node, but the L3 agent on that node has to fetch them once more.
If the two agents could share port information, the RPC messages could be
cut in half.

2) VIF plugging. Each type of L2 agent has a backend-specific method to
detect VIF plug events for edge binding. In the upcoming ML2 modular
agent, this detection and scanning work has to be done by each resource
driver, although it is really a generic event.

How about starting a local message bus on each host node (e.g. D-Bus or
kbus), so that all agents on that host can notify each other directly? The
potential benefits are less RPC interaction with the plugin and less
backend-technology-specific code.
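
A minimal sketch of the idea (pure illustration: a Linux abstract unix
datagram socket stands in for a real bus like D-Bus, which would also give
fan-out to multiple subscribers; the event name and address are made up):

    # publisher side, e.g. in the L2 agent, after a VIF plug is detected
    import json, socket

    BUS_ADDR = '\0neutron-host-bus'    # abstract socket, local host only

    def publish(event, payload):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        s.sendto(json.dumps({'event': event,
                             'payload': payload}).encode(), BUS_ADDR)

    publish('vif-plugged', {'port_id': 'PORT-UUID', 'device': 'tap1234abcd'})

    # subscriber side, e.g. in the L3/DVR agent on the same host
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    listener.bind(BUS_ADDR)
    while True:
        msg = json.loads(listener.recv(65536).decode())
        if msg['event'] == 'vif-plugged':
            # reuse the port info locally instead of a second RPC round trip
            pass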


Re: [openstack-dev] [neutron][SR-IOV]: RE: ML2 mechanism driver for SR-IOV capable NIC based switching, ...

2014-07-24 Thread loy wolfe
On Fri, Jul 25, 2014 at 5:43 AM, Robert Li (baoli) ba...@cisco.com wrote:

 Hi Kyle,

 Sorry I missed your queries on the IRC channel today. I was thinking about
 this whole BP. After chatting with Irena this morning, I think that I
 understand what this BP is trying to achieve overall. I also had a chat
 with Sandhya afterwards. I'd like to discuss a few things in here:

   - Sandhya's MD is going to support cisco's VMFEX. Overall her code's
 structure would look like very much similar to Irena's patch in part 1.
 However, she cannot simply inherit from SriovNicSwitchMechanismDriver. The
 differences for her code are: 1) get_vif_details() would populate
 profileid (rather than vlanid), 2) she'd need to do vmfex specific
 processing in try_to_bind(). We're thinking that with a little of
 generalization, SriovNicSwitchMechanismDriver() (with a changed name such
 as SriovMechanismDriver()) can be used both for nic switch and vmfex. It
 would look like in terms of class hierarchy:
  SriovMechanismDriver
 SriovNicSwitchMechanismDriver



SriovNicSwitchMellanoxMechanismDriver
SriovNicSwitchIntelMechanismDriver
SriovNicSwitchEmulexMechanismDriver

Do you also mean this for nicswitch besides QBR? If so, I think it makes
sense, and users can choose to load only the vendor-agnostic
SriovNicSwitchMD and SriovQBRMD, without any PCI vendor info. We also need
a vendor-agnostic agent, which just uses the standard Linux 'ip link'
command to configure the NIC. However, if users want the advanced features
provided by a vendor, they can load the inherited vendor-specific MD and
configure the agent side similarly.
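
Such a vendor-agnostic agent could indeed stay very small; roughly (a
sketch only, assuming the PF device name and VF index have already been
derived from the PCI slot of the bound port):

    # Sketch: apply vlan and admin state to a VF with plain iproute2
    import subprocess

    def configure_vf(pf_dev, vf_index, vlan_id, admin_up):
        # e.g. "ip link set dev eth2 vf 3 vlan 100"
        subprocess.check_call(['ip', 'link', 'set', 'dev', pf_dev,
                               'vf', str(vf_index), 'vlan', str(vlan_id)])
        # "state enable/disable" needs a reasonably recent kernel/iproute2
        state = 'enable' if admin_up else 'disable'
        subprocess.check_call(['ip', 'link', 'set', 'dev', pf_dev,
                               'vf', str(vf_index), 'state', state])

    configure_vf('eth2', 3, 100, True)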


 SriovQBRMechanismDriver
  SriovCiscoVmfexMechanismDriver

 Code duplication would be reduced significantly. The change would be:
    - make get_vif_details an abstract method in SriovMechanismDriver
    - make an abstract method to perform specific bind action required
  by a particular adaptor indicated in the PCI vendor info
    - vif type and agent type should be set based on the PCI vendor
  info

 A little change of patch part 1 would achieve the above
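
For readers following along, the generalization being proposed here would
look roughly like the following (a sketch, not the actual patch; method
names and return values are taken from the discussion above and are only
illustrative):

    import abc

    class SriovMechanismDriver(abc.ABC):
        """Common SR-IOV binding logic; adaptor specifics left abstract."""

        @abc.abstractmethod
        def get_vif_details(self, context, segment):
            """Return vif_details, e.g. a vlan id or a port profileid."""

        @abc.abstractmethod
        def try_to_bind(self, context, segment):
            """Adaptor-specific bind action, keyed off the PCI vendor info."""

    class SriovNicSwitchMechanismDriver(SriovMechanismDriver):
        def get_vif_details(self, context, segment):
            return {'vlan': segment.get('segmentation_id')}

        def try_to_bind(self, context, segment):
            return True    # NIC switch needs no extra backend call

    class SriovCiscoVmfexMechanismDriver(SriovMechanismDriver):
        def get_vif_details(self, context, segment):
            return {'profileid': 'port-profile-1'}   # placeholder value

        def try_to_bind(self, context, segment):
            return True    # VMFEX-specific processing would go here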

   - Originally I thought that SR-IOV port's status would be depending on
 the Sriov Agent (patch part 2). After chatting with Irena, this is not the
 case. So all the SR-IOV ports will be active once created or bound
 according to the try_to_bind() method. In addition, the current Sriov
 Agent (patch part 2) only supports port admin status change for mlnx
 adaptor. I think these caveats need to be spelled out explicitly to avoid
 any confusion or misunderstanding, at least in the documentation.

   - Sandhya has planned to support both intel and vmfex in her MD. This
 requires a hybrid sriov mech driver that populates vif details based on
 the PCI vendor info in the port. Another way to do this is to run two MDs
 in the same time, one supporting intel, the other vmfex. This would work
 well with the above classes. But it requires change of the two config
 options (in Irena's patch part one) so that per MD config options can be
 specified. I'm not sure if this is practical in real deployment (meaning
 use of SR-IOV adaptors from different vendors in the same deployment), but
 I think it's doable within the existing ml2 framework.


Absolutely, this is a mandatory requirement from many customers! They
don't care at all which vendor the NIC comes from.



 we¹ll go over the above in the next sr-iov IRC meeting as well.

 Thanks,
 Robert









 On 7/24/14, 1:55 PM, Kyle Mestery (Code Review) rev...@openstack.org
 wrote:

 Kyle Mestery has posted comments on this change.
 
 Change subject: ML2 mechanism driver for SR-IOV capable NIC based
 switching, Part 2
 ..
 
 
 Patch Set 3: Code-Review+2 Workflow+1
 
 I believe Irena has answered all of Robert's questions. Any subsequent
 issues can be handled as a followup.
 
 --
 To view, visit https://review.openstack.org/107651
 To unsubscribe, visit https://review.openstack.org/settings
 
 Gerrit-MessageType: comment
 Gerrit-Change-Id: I533ccee067935326d5837f90ba321a962e8dc2a6
 Gerrit-PatchSet: 3
 Gerrit-Project: openstack/neutron
 Gerrit-Branch: master
 Gerrit-Owner: Berezovsky Irena ire...@mellanox.com
 Gerrit-Reviewer: Akihiro Motoki mot...@da.jp.nec.com
 Gerrit-Reviewer: Arista Testing arista-openstack-t...@aristanetworks.com
 
 Gerrit-Reviewer: Baodong (Robert) Li ba...@cisco.com
 Gerrit-Reviewer: Berezovsky Irena ire...@mellanox.com
 Gerrit-Reviewer: Big Switch CI openstack...@bigswitch.com
 Gerrit-Reviewer: Brocade CI openstack_ger...@brocade.com
 Gerrit-Reviewer: Brocade OSS CI dl-grp-vyatta-...@brocade.com
 Gerrit-Reviewer: Cisco Neutron CI cisco-openstack-neutron...@cisco.com
 Gerrit-Reviewer: Freescale CI fslo...@freescale.com
 Gerrit-Reviewer: Hyper-V CI hyper-v...@microsoft.com
 Gerrit-Reviewer: Jenkins

Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-22 Thread loy wolfe
Here is another BP about NFV:

https://review.openstack.org/#/c/97715


On Tue, Jul 22, 2014 at 9:37 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:

 On Mon, Jul 21, 2014 at 02:52:04PM -0500,
 Kyle Mestery mest...@mestery.com wrote:

   Following up with post SAD status:
  
   * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
 extension support
  
   Remains unapproved, no negative feedback on current revision.
  
   * https://review.openstack.org/#/c/106222/ Add Port Security
 Implementation in ML2 Plugin
  
   Has a -2 to highlight the significant overlap with 99873 above.
  
   Although there were some discussions about these last week I am not
 sure we reached consensus on whether either of these (or even both of them)
 are the correct path forward - particularly to address the problem Brent
 raised w.r.t. to creation of networks without subnets - I believe this
 currently still works with nova-network?
  
   Regardless, I am wondering if either of the spec authors intend to
 propose these for a spec freeze exception?
  
  For the port security implementation in ML2, I've had one of the
  authors reach out to me. I'd like them to send an email to the
  openstack-dev ML though, so we can have the discussion here.

 As I commented at the gerrit, we, two authors of port security
 (Shweta and me), have agreed that the blueprints/specs will be unified.
 I'll send a mail for a spec freeze exception soon.

 thanks,
 --
 Isaku Yamahata isaku.yamah...@gmail.com



Re: [openstack-dev] [Neutron][Spec Freeze Exception] ml2-ovs-portsecurity

2014-07-21 Thread loy wolfe
Any relation to this BP?

https://review.openstack.org/#/c/97715/6/specs/juno/nfv-unaddressed-interfaces.rst



On Tue, Jul 22, 2014 at 11:17 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:


 I'd like to request Juno spec freeze exception for ML2 OVS portsecurity
 extension.

 - https://review.openstack.org/#/c/99873/
   ML2 OVS: portsecurity extension support

 - https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity
   Add portsecurity support to ML2 OVS mechanism driver

 The spec/blueprint adds portsecurity extension to ML2 plugin and implements
 it in ovs mechanism driver with iptables_firewall driver.
 The spec has gotten 5 +1 with many respins.
 This feature will be a basement to run network service within VM.

 There is another spec whose goal is same.
 - https://review.openstack.org/#/c/106222/
   Add Port Security Implementation in ML2 Plugin
 The author, Shweta, and I have agreed to consolidate those specs/blueprints
 and unite for the same goal.

 Thanks,
 --
 Isaku Yamahata isaku.yamah...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-14 Thread loy wolfe
On Mon, Jul 14, 2014 at 4:14 PM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:

 Hi.

  4) with no-port-security option, we should implement ovs-plug instead
  ovs-hybird-plug, to totally bypass qbr but not just changing iptable
 rules.
  the performance of later is 50% lower for small size packet even if the
  iptable is empty, and 20% lower even if we disable iptable hook on linux
  bridge.

 Is this only for performance reason?
 What do you think about disabling and then enabling port-security?
 portsecurity API allows to dynamically change the setting after port
 plugging.

 thanks,


The ideal way would be for OVS to hook into the iptables chain on a per-flow
basis, but for now we have to make some trade-offs. The requirement for no
filtering comes from NFV: VNF VMs should not need to dynamically enable or
disable filtering, and they are I/O-performance-critical applications.

However, at the API level we may need to distinguish two cases: for VNF VMs we
need to totally bypass qbr with a 'no-port-filter' setting and ovs-plug, while
for certain other VMs we just need something like a 'default-empty-filter',
still with ovs-hybrid-plug.
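
As a rough illustration of the enable/disable path being discussed (a sketch
only: the port_security_enabled attribute comes from the existing port-security
extension that these specs propose to bring to ML2, and whether the backend
would actually switch between ovs-plug and ovs-hybrid-plug on such an update is
exactly the open question here):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    NET_ID = 'REPLACE-WITH-NETWORK-UUID'

    # Create a VNF port with filtering disabled from the start, so it could be
    # plugged straight into br-int (ovs-plug) without the qbr bridge.
    port = neutron.create_port({'port': {
        'network_id': NET_ID,
        'port_security_enabled': False,
    }})['port']

    # Dynamically re-enabling filtering later would imply re-wiring the VIF
    # through the qbr/iptables path, which is the costly part for NFV workloads.
    neutron.update_port(port['id'], {'port': {'port_security_enabled': True}})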



 On Mon, Jul 14, 2014 at 11:19:05AM +0800,
 loy wolfe loywo...@gmail.com wrote:

  port with flexible ip address setting is necessary. I collected several
 use
  cases:
 
  1) when creating a port, we need to indicate that,
  [A] binding to none of subnet(no ip address);
  [B] binding to all subnets;
  [C] binding to any subnet;
  [D] binding to explicitly list of subnets, and/or list of ip address
 in
  each subnet.
  It seems that existing code implement [C] as the default case.
 
  2) after created the port, we need to dynamically change it's address
  setting:
  [A] remove a single ip address
  [B] remove all ip address of a subnet
  [C] add ip address on specified subnet
  it's not the same as allowed-addr-pair, but it really need to allocate
 ip
  in the subnet.
 
  3) we need to allow router add interface by network uuid, not only subnet
  uuid
  today L3 router add interface by subnet, but it's not the common use case
  that a L2 segment connect to different router interface with it's
 different
  subnets. when a network has multiple subnets, we should allow the network
  but not the subnet to attach the router. Also, we should allow a network
  without any subnet (or a port without ip address) to attach to a router
  (some like a brouter), while adding/deleting interface address of
 different
  subnets dynamically later.
 
  this  feature should also be helpful for plug-gable external network BP.
 
  4) with no-port-security option, we should implement ovs-plug instead
  ovs-hybird-plug, to totally bypass qbr but not just changing iptable
 rules.
  the performance of later is 50% lower for small size packet even if the
  iptable is empty, and 20% lower even if we disable iptable hook on linux
  bridge.
 
 
 
  On Mon, Jul 14, 2014 at 9:56 AM, Kyle Mestery mest...@noironetworks.com
 
  wrote:
 
   On Fri, Jul 11, 2014 at 4:41 PM, Brent Eagles beag...@redhat.com
 wrote:
  
   Hi,
  
   A bug titled Creating quantum L2 networks (without subnets) doesn't
   work as expected (https://bugs.launchpad.net/nova/+bug/1039665) was
   reported quite some time ago. Beyond the discussion in the bug report,
   there have been related bugs reported a few times.
  
   * https://bugs.launchpad.net/nova/+bug/1304409
   * https://bugs.launchpad.net/nova/+bug/1252410
   * https://bugs.launchpad.net/nova/+bug/1237711
   * https://bugs.launchpad.net/nova/+bug/1311731
   * https://bugs.launchpad.net/nova/+bug/1043827
  
   BZs on this subject seem to have a hard time surviving. The get marked
   as incomplete or invalid, or in the related issues, the problem NOT
   related to the feature is addressed and the bug closed. We seem to
 dance
   around actually getting around to implementing this. The multiple
   reports show there *is* interest in this functionality but at the
 moment
   we are without an actual implementation.
  
   At the moment there are multiple related blueprints:
  
   * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
 extension support
   * https://review.openstack.org/#/c/106222/ Add Port Security
 Implementation in ML2 Plugin
   * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces
  
   The first two blueprints, besides appearing to be very similar,
 propose
   implementing the port security extension currently employed by one
 of
   the neutron plugins. It is related to this issue as it allows a port
 to
   be configured indicating it does not want security groups to apply.
 This
   is relevant because without an address, a security group cannot be
   applied and this is treated as an error. Being able to specify
   skipping the security group criteria gets us a port on the network
   without an address, which is what happens when there is no subnet.
  
   The third approach is, on the face of it, related in that it proposes

Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-13 Thread loy wolfe
A port with flexible IP address settings is necessary. I have collected several
use cases (a rough API sketch follows after this list):

1) When creating a port, we need to indicate one of:
[A] binding to no subnet (no IP address);
[B] binding to all subnets;
[C] binding to any subnet;
[D] binding to an explicit list of subnets, and/or a list of IP addresses in
each subnet.
It seems the existing code implements [C] as the default case.

2) After creating the port, we need to dynamically change its address
settings:
[A] remove a single IP address
[B] remove all IP addresses of a subnet
[C] add an IP address on a specified subnet
This is not the same as allowed-address-pairs, because it really needs to
allocate an IP in the subnet.

3) We need to allow router-interface-add by network UUID, not only by subnet
UUID.
Today the L3 router adds interfaces by subnet, but it is not the common use
case for an L2 segment to connect its different subnets to different router
interfaces. When a network has multiple subnets, we should allow the network,
not just the subnet, to be attached to the router. Also, we should allow a
network without any subnet (or a port without an IP address) to be attached to
a router (somewhat like a brouter), with interface addresses on different
subnets added and deleted dynamically later.

This feature should also be helpful for the pluggable external network BP.

4) With the no-port-security option, we should implement ovs-plug instead of
ovs-hybrid-plug, to totally bypass qbr rather than just changing iptables
rules. The performance of the latter is 50% lower for small packets even when
the iptables rules are empty, and 20% lower even if we disable the iptables
hook on the Linux bridge.
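
As a rough sketch of what cases 1) and 2) could look like at the API level
([C] and [D] roughly match today's fixed_ips handling, while [A]/[B] and the
dynamic updates are proposals; the calls are plain python-neutronclient usage
and the UUIDs are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    NET_ID = 'REPLACE-WITH-NETWORK-UUID'      # a network with several subnets
    SUBNET_A = 'REPLACE-WITH-SUBNET-UUID'     # one of its subnets

    # [C] today's default: Neutron picks an address from any subnet
    neutron.create_port({'port': {'network_id': NET_ID}})

    # [A] proposal: no address at all (an empty fixed_ips list)
    neutron.create_port({'port': {'network_id': NET_ID, 'fixed_ips': []}})

    # [D] explicit subnet and/or address selection
    port = neutron.create_port({'port': {
        'network_id': NET_ID,
        'fixed_ips': [{'subnet_id': SUBNET_A, 'ip_address': '10.0.0.8'}],
    }})['port']

    # 2) dynamic changes: add or remove allocations by rewriting fixed_ips,
    # which (unlike allowed-address-pairs) really allocates from the subnet
    neutron.update_port(port['id'], {'port': {'fixed_ips': [
        {'subnet_id': SUBNET_A},
    ]}})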



On Mon, Jul 14, 2014 at 9:56 AM, Kyle Mestery mest...@noironetworks.com
wrote:

 On Fri, Jul 11, 2014 at 4:41 PM, Brent Eagles beag...@redhat.com wrote:

 Hi,

 A bug titled Creating quantum L2 networks (without subnets) doesn't
 work as expected (https://bugs.launchpad.net/nova/+bug/1039665) was
 reported quite some time ago. Beyond the discussion in the bug report,
 there have been related bugs reported a few times.

 * https://bugs.launchpad.net/nova/+bug/1304409
 * https://bugs.launchpad.net/nova/+bug/1252410
 * https://bugs.launchpad.net/nova/+bug/1237711
 * https://bugs.launchpad.net/nova/+bug/1311731
 * https://bugs.launchpad.net/nova/+bug/1043827

 BZs on this subject seem to have a hard time surviving. The get marked
 as incomplete or invalid, or in the related issues, the problem NOT
 related to the feature is addressed and the bug closed. We seem to dance
 around actually getting around to implementing this. The multiple
 reports show there *is* interest in this functionality but at the moment
 we are without an actual implementation.

 At the moment there are multiple related blueprints:

 * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
   extension support
 * https://review.openstack.org/#/c/106222/ Add Port Security
   Implementation in ML2 Plugin
 * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces

 The first two blueprints, besides appearing to be very similar, propose
 implementing the port security extension currently employed by one of
 the neutron plugins. It is related to this issue as it allows a port to
 be configured indicating it does not want security groups to apply. This
 is relevant because without an address, a security group cannot be
 applied and this is treated as an error. Being able to specify
 skipping the security group criteria gets us a port on the network
 without an address, which is what happens when there is no subnet.

 The third approach is, on the face of it, related in that it proposes an
 interface without an address. However, on review it seems that the
 intent is not necessarily inline with the some of the BZs mentioned
 above. Indeed there is text that seems to pretty clearly state that it
 is not intended to cover the port-without-an-IP situation. As an aside,
 the title in the commit message in the review could use revising.

 In order to implement something that finally implements the
 functionality alluded to in the above BZs in Juno, we need to settle on
 a blueprint and direction. Barring the happy possiblity of a resolution
 beforehand, can this be made an agenda item in the next Nova and/or
 Neutron meetings?

 I think this is worth discussing. I've added this to the Team Discussion
 Topics section of the Neutron meeting [1] on 7-14-2014. I hope you can
 attend Brent!

 Thanks,
 Kyle

 [1]
 https://wiki.openstack.org/wiki/Network/Meetings#Team_Discussion_Topics


 Cheers,

 Brent

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list

Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-10 Thread loy wolfe
+1

Kernel OVS and userspace OVS are totally different. Also, there is a strong
need to keep kernel OVS even when a userspace OVS exists on the same host.
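
A minimal sketch of the difference being discussed, just to make the two
options concrete (VIF_TYPE_USEROVS and the 'use_dpdk' key are only proposals
from this thread, and the helper functions are placeholders, not Nova code):

    def plug_kernel_ovs(vif):
        pass   # today's path: tap device plus the qbr hybrid plumbing

    def plug_userspace_vhost(vif):
        pass   # DPDK path: a userspace vhost socket, no kernel datapath

    # Option A: reuse VIF_TYPE_OVS and branch on a flag in binding:vif_details
    def plug_with_flag(vif):
        if vif.get('details', {}).get('use_dpdk'):     # hypothetical key
            plug_userspace_vhost(vif)                  # completely different path
        else:
            plug_kernel_ovs(vif)

    # Option B: a dedicated vif_type, dispatched like any other binding type
    VIF_TYPE_USEROVS = 'userovs'                       # proposed constant

    PLUG_HANDLERS = {
        'ovs': plug_kernel_ovs,
        VIF_TYPE_USEROVS: plug_userspace_vhost,
    }

    def plug_with_vif_type(vif):
        PLUG_HANDLERS[vif['type']](vif)

Option B keeps the two data paths as separate bindings, which is why a new
vif_type (and probably a new vnic_type) looks cleaner than overloading
VIF_TYPE_OVS.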


On Fri, Jul 11, 2014 at 7:59 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 10 July 2014 08:19, Czesnowicz, Przemyslaw 
 przemyslaw.czesnow...@intel.com wrote:

  Hi,



 Thanks for Your answers.



 Yep using binding:vif_details makes more sense. We would like to reuse
 VIF_TYPE_OVS and modify the nova to use the userspace vhost when ‘use_dpdk’
 flag is present.


 I admit it's a bit of a pedantic point, but connecting to OVS and
 connecting to userspace OVS would seem to be drastically different (neither
 type will work with the wrong switch) so using a different binding type
 would seem to be rather more appropriate than if(this flag is set) { do
 something completely different; }.


  What we are missing is how to inform the ml2 plugin/mechanism drivers
 when to put that ‘use_dpdk’ flag into vif_details.


 You might want to refer to the mail I sent yesterday evening about getting
 Nova to tell Neutron what binding types it supports.  I'm not sure it's the
 answer to your problem but it might well simplify it.

  On the node ovs_neutron_agent could look up datapath_type in ovsdb, but
 how can we provide that info to the plugin?

 Currently there is no mechanism to get node specific info into the ml2
 plugin (or at least we don’t see any).



 Any ideas on how this could be implemented?



 Regards

 Przemek

 *From:* Irena Berezovsky [mailto:ire...@mellanox.com
 ire...@mellanox.com]
 *Sent:* Thursday, July 10, 2014 8:08 AM
 *To:* OpenStack Development Mailing List (not for usage questions);
 Czesnowicz, Przemyslaw
 *Cc:* Mooney, Sean K
 *Subject:* RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2
 plugin



 Hi,

 For passing  information from neutron to nova VIF Driver, you should use
 binding:vif_details dictionary.  You may not require new VIF_TYPE, but can
 leverage the existing VIF_TYPE_OVS, and add ‘use_dpdk’   in vif_details
 dictionary. This will require some rework of the existing libvirt
 vif_driver VIF_TYPE_OVS.



 Binding:profile is considered as input dictionary that is used to pass
 information required for port binding on Server side. You  may use
 binding:profile to pass in  a dpdk ovs request, so it will be taken into
 port binding consideration by ML2 plugin.



 I am not sure regarding new vnic_type, since it will require  port owner
 to pass in the requested type. Is it your intention? Should the port owner
 be aware of dpdk ovs usage?

 There is also VM scheduling consideration that if certain vnic_type is
 requested, VM should be scheduled on the node that can satisfy the request.



 Regards,

 Irena





 *From:* loy wolfe [mailto:loywo...@gmail.com loywo...@gmail.com]
 *Sent:* Thursday, July 10, 2014 6:00 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Mooney, Sean K
 *Subject:* Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2
 plugin



 i think both a new vnic_type and a new vif_type should be added. now vnic
 has three types: normal, direct, macvtap, then we need a new type of
 uservhost.



 as for vif_type, now we have VIF_TYPE_OVS, VIF_TYPE_QBH/QBG, VIF_HW_VEB,
 so we need a new VIF_TYPE_USEROVS



 I don't think it's a good idea to directly reuse ovs agent, for we have
 to consider use cases that ovs and userovs co-exists. Now it's a little
 painful to fork and write a new agent, but it will be easier when ML2 agent
 BP is merged in the future. (
 https://etherpad.openstack.org/p/modular-l2-agent-outline)



 On Wed, Jul 9, 2014 at 11:08 PM, Czesnowicz, Przemyslaw 
 przemyslaw.czesnow...@intel.com wrote:

 Hi



 We (Intel Openstack team) would like to add support for dpdk based
 userspace openvswitch using mech_openvswitch and mech_odl from ML2 plugin.

 The dpdk enabled ovs comes in two flavours one is netdev incorporated
 into vanilla ovs the other is a fork of ovs with a dpdk datapath (
 https://github.com/01org/dpdk-ovs ).

 Both flavours use userspace vhost mechanism to connect the VMs to the
 switch.



 Our initial approach was to extend ovs vif bindings in nova and add a
 config parameter to specify when userspace vhost should be used.

 Spec : https://review.openstack.org/95805

 Code: https://review.openstack.org/100256



 Nova devs rejected this approach saying that Neutron should pass all
 necessary information to nova to select vif bindings.



 Currently we are looking for a way to pass information from Neutron to
 Nova that dpdk enabled ovs is being used while still being able to use
 mech_openvswitch and ovs_neutron_agent or mech_odl.




 We thought of two possible solutions:

 1.  Use binding_profile to provide node specific info to nova.

 Agent rpc api would be extended to allow agents to send node profile to
 neutron plugin.

 That info would be stored in db and passed to nova when binding on this
 specific host is requested.

 This could be used

Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-09 Thread loy wolfe
I think both a new vnic_type and a new vif_type should be added. vnic currently
has three types (normal, direct, macvtap), so we would need a new type such as
uservhost.

As for vif_type, we currently have VIF_TYPE_OVS, VIF_TYPE_QBH/QBG, and
VIF_HW_VEB, so we would need a new VIF_TYPE_USEROVS.

I don't think it's a good idea to directly reuse the OVS agent, because we have
to consider use cases where kernel OVS and userspace OVS co-exist. For now it's
a little painful to fork and write a new agent, but it will become easier once
the modular L2 agent BP is merged
(https://etherpad.openstack.org/p/modular-l2-agent-outline).
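
To make the "how does the plugin learn about the node" question concrete, here
is an illustrative sketch only (none of this is merged code; 'datapath_type'
would be a new key in the agent's reported configurations, and VIF_TYPE_USEROVS
is the name proposed above):

    VIF_TYPE_OVS = 'ovs'
    VIF_TYPE_USEROVS = 'userovs'          # proposed, not in the tree

    def choose_vif_type(agent):
        # The OVS agent already reports a configurations dict to the server
        # (agents_db); it could additionally report its datapath type.
        cfg = agent.get('configurations', {})
        if cfg.get('datapath_type') == 'netdev':   # userspace/DPDK datapath
            return VIF_TYPE_USEROVS
        return VIF_TYPE_OVS

    # Inside a try_to_bind_segment_for_agent()-style method, the mechanism
    # driver would then pass choose_vif_type(agent) to context.set_binding()
    # instead of a hard-coded VIF_TYPE_OVS.

That way a single mechanism driver could still bind both kernel and userspace
hosts correctly when they co-exist in one deployment.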


On Wed, Jul 9, 2014 at 11:08 PM, Czesnowicz, Przemyslaw 
przemyslaw.czesnow...@intel.com wrote:

  Hi



 We (Intel Openstack team) would like to add support for dpdk based
 userspace openvswitch using mech_openvswitch and mech_odl from ML2 plugin.

 The dpdk enabled ovs comes in two flavours one is netdev incorporated into
 vanilla ovs the other is a fork of ovs with a dpdk datapath (
 https://github.com/01org/dpdk-ovs ).

 Both flavours use userspace vhost mechanism to connect the VMs to the
 switch.



 Our initial approach was to extend ovs vif bindings in nova and add a
 config parameter to specify when userspace vhost should be used.

 Spec : https://review.openstack.org/95805

 Code: https://review.openstack.org/100256



 Nova devs rejected this approach saying that Neutron should pass all
 necessary information to nova to select vif bindings.



 Currently we are looking for a way to pass information from Neutron to
 Nova that dpdk enabled ovs is being used while still being able to use
 mech_openvswitch and ovs_neutron_agent or mech_odl.




 We thought of two possible solutions:

 1.  Use binding_profile to provide node specific info to nova.

 Agent rpc api would be extended to allow agents to send node profile to
 neutron plugin.

 That info would be stored in db and passed to nova when binding on this
 specific host is requested.

 This could be used to support our use case or pass other info to nova (i.e
 name of integration bridge)



 2.  Let mech_openvswitch and mech_odl detect what binding type to use.

 When asked for port binding mech_openvswitch and mech_odl would call the
 agent or odl  to check what bindings to use (VIF_TYPE_OVS or
 VIF_TYPE_DPDKVHOST)



 So, what would be the best way to support our usecase, is it one of the
 above ?



 Best regards

 Przemek

 --
 Intel Shannon Limited
 Registered in Ireland
 Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
 Registered Number: 308263
 Business address: Dromore House, East Park, Shannon, Co. Clare

 This e-mail and any attachments may contain confidential material for the
 sole use of the intended recipient(s). Any review or distribution by others
 is strictly prohibited. If you are not the intended recipient, please
 contact the sender and delete all copies.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [ML2] [L2GW] multiple l2gw project in ml2 team?

2014-07-02 Thread loy wolfe
I read the ML2 tracking reviews and found two similar specs for an L2 gateway:

1) GW API: L2 bridging API - Piece 1: Basic use cases
https://review.openstack.org/#/c/93613/


2) API Extension for l2-gateway
https://review.openstack.org/#/c/100278/

The Neutron external port spec is also related:
https://review.openstack.org/#/c/87825/

All of these specs address the same problem: how to establish a bridging
connection between a native Neutron-created VIF and an external port on a
physical node. Is there any unification effort under way to merge these
projects?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread loy wolfe
On Wed, Jun 25, 2014 at 10:29 PM, McCann, Jack jack.mcc...@hp.com wrote:

   If every compute node is
   assigned a public ip, is it technically able to improve SNAT packets
   w/o going through the network node ?

 It is technically possible to implement default SNAT at the compute node.

 One approach would be to use a single IP address per compute node as a
 default SNAT address shared by all VMs on that compute node.  While this
 optimizes for number of external IPs consumed per compute node, the
 downside
 is having VMs from different tenants sharing the same default SNAT IP
 address
 and conntrack table.  That downside may be acceptable for some deployments,
 but it is not acceptable in others.

 In fact, it is only acceptable in some very special cases.




 Another approach would be to use a single IP address per router per compute
 node.  This avoids the multi-tenant issue mentioned above, at the cost of
 consuming more IP addresses, potentially one default SNAT IP address for
 each
 VM on the compute server (which is the case when every VM on the compute
 node
 is from a different tenant and/or using a different router).  At that point
 you might as well give each VM a floating IP.

 Hence the approach taken with the initial DVR implementation is to keep
 default SNAT as a centralized service.


In contrast to moving services onto distributed compute nodes, we should take
care to keep some of them centralized, especially FIP and FW. I know a lot of
customers prefer using dedicated servers as network nodes, with more NICs (for
external connections) than the compute nodes have; in these cases FIP must be
centralized instead of distributed. As for FW, if we want stateful ACLs then
DVR can do nothing, unless we consider security groups to already be a kind of
FW.
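
To make the address-cost trade-off concrete, here is a tiny back-of-the-envelope
sketch (all numbers are invented for illustration only):

    # Illustrative arithmetic only -- not measurements.
    routers = 200                    # tenant routers in the cloud
    compute_nodes = 100

    centralized = routers                      # today: one SNAT IP per router
    per_node_shared = compute_nodes            # one shared IP per compute node,
                                               # but tenants share one SNAT IP
                                               # and its conntrack table
    per_node_per_router = routers * compute_nodes   # worst case, if every router
                                                    # has a VM on every node

    print(centralized, per_node_shared, per_node_per_router)   # 200 100 20000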




 - Jack

  -Original Message-
  From: Zang MingJie [mailto:zealot0...@gmail.com]
  Sent: Wednesday, June 25, 2014 6:34 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut
 
  On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com
 wrote:
   Hi,
   for each compute node to have SNAT to Internet, I think we have the
   drawbacks:
   1. SNAT is done in router, so each router will have to consume one
 public IP
   on each compute node, which is money.
 
  SNAT can save more ips than wasted on floating ips
 
   2. for each compute node to go out to Internet, the compute node will
 have
   one more NIC, which connect to physical switch, which is money too
  
 
  Floating ip also need a public NIC on br-ex. Also we can use a
  separate vlan to handle the network, so this is not a problem
 
   So personally, I like the design:
floating IPs and 1:N SNAT still use current network nodes, which will
 have
   HA solution enabled and we can have many l3 agents to host routers. but
   normal east/west traffic across compute nodes can use DVR.
 
  BTW, does HA implementation still active ? I haven't seen it has been
  touched for a while
 
  
   yong sheng gong
  
  
   On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com
 wrote:
  
   Hi:
  
   In current DVR design, SNAT is north/south direction, but packets have
   to go west/east through the network node. If every compute node is
   assigned a public ip, is it technically able to improve SNAT packets
   w/o going through the network node ?
  
   SNAT versus floating ips, can save tons of public ips, in trade of
   introducing a single failure point, and limiting the bandwidth of the
   network node. If the SNAT performance problem can be solved, I'll
   encourage people to use SNAT over floating ips. unless the VM is
   serving a public service
  
   --
   Zang MingJie
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-20 Thread loy wolfe
GP should support applying policy to existing OpenStack deployments, so neither
implicit mapping nor interception works well.

Maybe the explicit association model is best: associate an EPG with an existing
Neutron network object (policy automatically applied to all ports on it), or
with a single port object (policy applied only to that port). This way GP is
more loosely coupled with Neutron core than in the spec's sample, which boots a
VM from a brand-new EP object, requires rewriting the Nova VIF plugging, and
only supports new deployments. It is suitable to put GP in the orchestration
layer, e.g. Heat, without touching Nova code. Booting a VM from an EPG can be
interpreted by orchestration as: 1) create a port on the network associated
with the EPG (a rough sketch follows below); 2) boot the Nova instance from
that port. In the future we may also need a unified abstract policy template
across compute/storage/network.
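
A rough sketch of that orchestration interpretation (the EPG association call
is a hypothetical GP API, while the port-create and boot-from-port calls are
ordinary Neutron/Nova client usage; all IDs are placeholders):

    from neutronclient.v2_0 import client as neutron_client
    from novaclient.v1_1 import client as nova_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')
    nova = nova_client.Client('admin', 'secret', 'demo',
                              'http://controller:5000/v2.0')

    network_id = 'REPLACE-WITH-NETWORK-UUID'   # existing network, explicitly
                                               # associated with an EPG by a
                                               # hypothetical GP call such as:
    # gp.associate_epg_with_network(epg_id, network_id)

    # 1) create a port on the network; GP policy applies to it automatically
    port = neutron.create_port({'port': {'network_id': network_id}})['port']

    # 2) boot the VM from that port; Nova never needs to know about GP objects
    nova.servers.create('vm1', 'REPLACE-IMAGE-UUID', 'REPLACE-FLAVOR-ID',
                        nics=[{'port-id': port['id']}])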

Also, it is not a good idea to intercept the Neutron port-create API for
implicit EP binding (I don't know whether this has been removed by now),
because it severely breaks the hierarchical relationship between GP and Neutron
core. The link from the GP wiki to an ODL page clearly shows that GP should be
layered on top of both Neutron and ODL (first graph).

http://webcache.googleusercontent.com/search?q=cache:https://wiki.opendaylight.org/view/Project_Proposals:Application_Policy_Plugin#Relationship_with_OpenStack.2FNeutron_Policy_Model
(That page has hidden all of its pictures this week, so I have to give the
Google cache.)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev