Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-04-01 Thread Kevin Benton
It might be possible with iptables or ebtables rules, but no such work is
planned that I'm aware of, and it would be non-trivial to do. The current
implementation depends heavily on OVS flow rules. [1]

1. https://wiki.openstack.org/wiki/Neutron/DVR_L2_Agent

On Tue, Mar 31, 2015 at 10:37 PM, Dr. Jens Rosenboom j.rosenb...@x-ion.de
wrote:

 On 01/04/15 at 04:10, Kevin Benton wrote:

 It's worth pointing out here that the in-tree OVS solution controls
 traffic
 using iptables on regular bridges too. The difference between the two
 occurs when it comes to how traffic is separated into different networks.

 It's also worth noting that DVR requires OVS as well. If nobody is
 comfortable with OVS then they can't use DVR and they won't have parity
 with Nova network as far as floating IP resilience and performance is
 concerned.


 It was my understanding that the reason for this was that the first
 implementation for DVR was only done for OVS, probably because it is the
 default. Or is there some reason to assume that DVR also cannot be made to
 work with linuxbridge within Liberty?

 FWIW, I think I made some progress in getting [1] to work, though if
 someone could jump in and make a proper patch from my hack, that would be
 great.

 [1] https://review.openstack.org/168423


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Sean Dague




Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread James Bottomley
On Fri, 2015-03-27 at 17:01 +, Tim Bell wrote:
 From the stats 
 (http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014),
 
 
 -43% of production clouds use OVS (the default for Neutron)
 
 -30% of production clouds are Nova network based
 
 -15% of production clouds use linux bridge
 
 There is therefore a significant share of the OpenStack production
 user community who are interested in a simple provider network linux
 bridge based solution.
  
 I think it is important to make an attractive cloud solution where
 deployers can look at the balance of function and their skills and
 choose the appropriate combinations.
 
 Whether a simple network model should be the default is a different
 question from whether there should be a simple option. Personally, one of
 the most regular complaints I get is the steep learning curve for a new
 deployment. If we could make it so that people can do it as a series
 of steps (by making a path to add OVS) rather than a large leap, I
 think that would be attractive.

To be honest, there's a technology gap between the LinuxBridge and OVS
that cannot be filled.  We've found, since we sell technology to hosting
companies, that we got an immediate backlash when we tried to
switch from a LinuxBridge to OVS in our Cloud Server product.  The
specific problem is that lots of hosting providers have heavily scripted
iptables and traffic control rules on the host side (i.e. on the bridge)
for controlling client networks which simply cannot be replicated in
OVS.  Telling the customers to rewrite everything in OpenFlow causes
incredulity and threats to pull the product.  No currently existing or
planned technology is there to fill this gap (the closest was Google's
plan to migrate iptables rules to OpenFlow, which died).  Our net
takeaway is that we have to provide both options for the foreseeable
future (scripting works to convert some use cases, but by no means
all ... and in our case not even a majority).
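The kind of host-side scripting being described might look like the
following sketch; the bridge port name (vnet0) and the specific rules are
purely illustrative, not taken from any real deployment:

```shell
# Illustrative host-side rules of the kind hosting providers script against
# a Linux bridge; vnet0 is a hypothetical per-client bridge port.

# Filter traffic traversing the bridge for one client port (physdev match):
iptables -A FORWARD -m physdev --physdev-in vnet0 -p tcp --dport 25 -j DROP

# Shape the same client's bandwidth with tc:
tc qdisc add dev vnet0 root tbf rate 10mbit burst 32kbit latency 50ms
```

Neither rule has a direct equivalent once the traffic moves into OVS flow
tables, which is the gap being described.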

So the point of this for OpenStack is that seeing this as a choice between
LinuxBridge and OVS sets up a false dichotomy.  Realistically
the future network technology has to support both, at least until the
trailing edge becomes more comfortable with SDN.

Moving neutron to ML2 instead of L2 helps isolate neutron from the
bridge technology, but it doesn't do anything to help the customer who
is currently poking at L2 to implement specific policies because they
have to care what the bridge technology is.  Telling the customer not to
poke the bridge isn't an option because they see the L2 plane as their
primary interface to diagnose and fix network issues ... which is why
they care about the bridge technology.

James





Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Sean Dague
On 03/30/2015 05:58 PM, Sean M. Collins wrote:
 Quick update about OVS vs LB:
 Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
 https://review.openstack.org/#/c/168423/

 So far it's failing pretty badly.
 
 
 I haven't had a chance to debug the failures - it is my hope that
 perhaps there are just more changes I need to make to DevStack to make
 LinuxBridge work correctly. If anyone is successfully using LinuxBridge
 with DevStack, please do review that patch and offer suggestions or
 share your local.conf file. :)

(apologies for the previous encrypted email, enigmail somehow flagged
openstack-dev as encrypt-by-default for me.)

... Right, remember that getting a working neutron config requires a raft of
variables set correctly in the first place. Also, unlike n-net (which
owns setting up its own network), neutron doesn't bootstrap its own
bridges. Devstack has to specifically run ovs commands to create the
bridges for neutron, otherwise it faceplants. I expect that in this case
we're missing all that extra devstack initialization, based on what
I saw in the failures.
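For reference, the bridge bootstrap being referred to amounts to devstack
running something like the following before neutron's agents start
(br-int/br-ex are the usual integration and external bridge names; the NIC
name is an illustrative assumption):

```shell
# Create the OVS bridges neutron expects but does not create itself:
sudo ovs-vsctl --no-wait -- --may-exist add-br br-int
sudo ovs-vsctl --no-wait -- --may-exist add-br br-ex
# Optionally attach the NIC carrying external traffic (name illustrative):
sudo ovs-vsctl -- --may-exist add-port br-ex eth1
```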

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Kevin Benton
It's worth pointing out here that the in-tree OVS solution controls traffic
using iptables on regular bridges too. The difference between the two
occurs when it comes to how traffic is separated into different networks.

It's also worth noting that DVR requires OVS as well. If nobody is
comfortable with OVS then they can't use DVR and they won't have parity
with Nova network as far as floating IP resilience and performance is
concerned.
On Mar 31, 2015 4:56 AM, James Bottomley 
james.bottom...@hansenpartnership.com wrote:

 On Fri, 2015-03-27 at 17:01 +, Tim Bell wrote:
  From the stats (
 http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
 ),
 
 
  -43% of production clouds use OVS (the default for Neutron)
 
  -30% of production clouds are Nova network based
 
  -15% of production clouds use linux bridge
 
  There is therefore a significant share of the OpenStack production
  user community who are interested in a simple provider network linux
  bridge based solution.
 
  I think it is important to make an attractive cloud solution where
  deployers can look at the balance of function and their skills and
  choose the appropriate combinations.
 
  Whether a simple network model should be the default is a different
  question from whether there should be a simple option. Personally, one of
  the most regular complaints I get is the steep learning curve for a new
  deployment. If we could make it so that people can do it as a series
  of steps (by making a path to add OVS) rather than a large leap, I
  think that would be attractive.

 To be honest, there's a technology gap between the LinuxBridge and OVS
 that cannot be filled.  We've found, since we sell technology to hosting
 companies, that we got an immediate backlash when we tried to
 switch from a LinuxBridge to OVS in our Cloud Server product.  The
 specific problem is that lots of hosting providers have heavily scripted
 iptables and traffic control rules on the host side (i.e. on the bridge)
 for controlling client networks which simply cannot be replicated in
 OVS.  Telling the customers to rewrite everything in OpenFlow causes
 incredulity and threats to pull the product.  No currently existing or
 planned technology is there to fill this gap (the closest was Google's
 plan to migrate iptables rules to OpenFlow, which died).  Our net
 takeaway is that we have to provide both options for the foreseeable
 future (scripting works to convert some use cases, but by no means
 all ... and in our case not even a majority).

 So the point of this for OpenStack is that seeing this as a choice between
 LinuxBridge and OVS sets up a false dichotomy.  Realistically
 the future network technology has to support both, at least until the
 trailing edge becomes more comfortable with SDN.

 Moving neutron to ML2 instead of L2 helps isolate neutron from the
 bridge technology, but it doesn't do anything to help the customer who
 is currently poking at L2 to implement specific policies because they
 have to care what the bridge technology is.  Telling the customer not to
 poke the bridge isn't an option because they see the L2 plane as their
 primary interface to diagnose and fix network issues ... which is why
 they care about the bridge technology.

 James





Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Dr. Jens Rosenboom

On 01/04/15 at 04:10, Kevin Benton wrote:

It's worth pointing out here that the in-tree OVS solution controls traffic
using iptables on regular bridges too. The difference between the two
occurs when it comes to how traffic is separated into different networks.

It's also worth noting that DVR requires OVS as well. If nobody is
comfortable with OVS then they can't use DVR and they won't have parity
with Nova network as far as floating IP resilience and performance is
concerned.


It was my understanding that the reason for this was that the first 
implementation for DVR was only done for OVS, probably because it is the 
default. Or is there some reason to assume that DVR also cannot be made 
to work with linuxbridge within Liberty?


FWIW, I think I made some progress in getting [1] to work, though if 
someone could jump in and make a proper patch from my hack, that would 
be great.


[1] https://review.openstack.org/168423



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Jesse Pretorius
On 28 March 2015 at 00:41, Steve Wormley openst...@wormley.com wrote:

 So, I figured I'd weigh in on this as an employee of a nova-network-using
 company.

 Nova-network allowed us to do a couple of things simply.

 1. Attach openstack networks to our existing VLANs using our existing
 firewall/gateway and allow easy access to hardware such as database servers
 and storage on the same VLAN.
 2. Floating IPs managed at each compute node (multi-host) and via the
 standard nova API calls.
 3. Access to our instances via their private IP addresses from inside the
 company (see 1)

 Our forklift replacement to neutron (as we know we can't 'migrate') is at
 the following state.
 Point 2 meant we can't use pure provider VLAN networks, so we had to wait
 for DVR VLAN support to work.


I'm always confused when I see operators mention that provider VLANs can't
be used in a Neutron configuration. At my former employer we had that
setup with Grizzly; note also that any instance attached to a VLAN-tagged
tenant network did not go via the L3 agent... the traffic was tagged
and sent directly from the compute node onto the VLAN.

All we had to do to make this work was to allow VLAN-tagged networks, and
the cloud admin had to set up the provider network with the appropriate VLAN
tag.
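A minimal version of that provider-VLAN setup, using the neutron CLI of the
time, might look like this (the physical network label, VLAN ID, and subnet
are illustrative values, not from the deployment described):

```shell
# Admin creates a shared provider network mapped to an existing VLAN:
neutron net-create provider-vlan-100 --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100
# And a subnet matching the existing VLAN's addressing:
neutron subnet-create provider-vlan-100 10.1.0.0/24 \
    --name provider-subnet-100 --gateway 10.1.0.1
```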


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Assaf Muller


- Original Message -
 On 03/27/2015 11:48 AM, Assaf Muller wrote:
  
  
  - Original Message -
  On 03/27/2015 05:22 AM, Thierry Carrez wrote:
  snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: that Neutron
  thing is a scary beast, last time I looked into it I couldn't make sense
  of it, and nova-network just works for me.
 
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
 
  We found a dev/ops volunteer for writing that migration guide but he was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather than
  put words in their mouth.
 
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally not
  desirable.
 
  I think if you boil everything down, you end up with 3 really important
  differences.
 
  1) neutron is a fleet of services (it's very micro-service) and every
  service requires multiple and different config files. Just configuring
  the fleet is a beast if it is not devstack (and even if it is)
 
  2) neutron assumes the primary thing you're interested in is tenant-secured
  self-service networks. This is actually explicitly not interesting to a
  lot of deployments for policy, security, or political reasons/restrictions.
 
  3) the neutron open source backend defaults to OVS (largely because of #2).
  OVS is its own complicated engine that you need to learn to debug. While
  Linux bridge has challenges, it's also something that anyone who's
  worked with Linux & virtualization for the last 10 years has some
  experience with.
 
  (also, the devstack setup code for neutron is a rat's nest, as it was
  mostly not paid attention to. This means it's been zero help in explaining
  anything to people trying to do neutron. For better or worse devstack is
  our executable manual for a lot of these things)
 
  so that being said, I think we need to talk about minimum viable
  neutron as a model and figure out how far away that is from n-net. This
  week at the QA Sprint, Dean, Sean Collins, and I have spent some time
  hashing it out, hopefully with something to show the end of the week.
  This will be the new devstack code for neutron (the old lib/neutron is
  moved to lib/neutron-legacy).
 
  Default setup will be provider networks (which means no tenant
  isolation). For that you should only need neutron-api, -dhcp, and -l2.
  So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
  like to revert back to linux bridge for the base case (though first code
  will probably be OVS because that's the happy path today).
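As a sketch only, the "minimum viable neutron" described above might reduce
to an ML2 configuration along these lines (physnet1, eth1, and the VLAN
range are illustrative assumptions, not part of the actual devstack work):

```shell
# Hypothetical ML2 config for a provider-network-only, Linux-bridge setup:
cat > /etc/neutron/plugins/ml2/ml2_conf.ini <<'EOF'
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200

[linux_bridge]
physical_interface_mappings = physnet1:eth1
EOF
```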
 
  
  Looking at the latest user survey, OVS looks to be 3 times as popular as
  Linux bridge for production deployments. Having LB as the default seems
  like an odd choice. You also wouldn't want to change the default before
  LB is tested at the gate.
 
 Sure, actually testing defaults is presumed here. I didn't think it
 needed to be called out separately.

Quick update about OVS vs LB:
Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
https://review.openstack.org/#/c/168423/

So far it's failing pretty badly.

 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/30/2015 09:25 AM, Assaf Muller wrote:
 
 
 - Original Message -
 On 03/27/2015 11:48 AM, Assaf Muller wrote:


 - Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.

 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.

 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.

 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

 I think if you boil everything down, you end up with 3 really important
 differences.

 1) neutron is a fleet of services (it's very micro-service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)

 2) neutron assumes the primary thing you're interested in is tenant-secured
 self-service networks. This is actually explicitly not interesting to a
 lot of deployments for policy, security, or political reasons/restrictions.

 3) the neutron open source backend defaults to OVS (largely because of #2).
 OVS is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & virtualization for the last 10 years has some
 experience with.

 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been zero help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)

 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).

 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).


 Looking at the latest user survey, OVS looks to be 3 times as popular as
 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.

 Sure, actually testing defaults is presumed here. I didn't think it
 needed to be called out separately.
 
 Quick update about OVS vs LB:
 Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
 https://review.openstack.org/#/c/168423/
 
 So far it's failing pretty badly.
That is the nature of development.

Let's also note that this is patchset 1 of a patch marked work in progress.

If we start to make decisions about whether or not a direction is a
reasonable direction based on a patch which is expected to fail this early in
the development process, we seriously injure our ability to foster
development.

Please understand and respect the development process before expecting
others to make decisions prematurely.

Thank you,
Anita.
 

  -Sean

 --
 Sean Dague
 http://dague.net


 



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/26/2015 06:31 PM, Michael Still wrote:
 Hi,
 
 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.
 
 First off, creating an all-encompassing turn-key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, "Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs" [1]. The variety of deployment and configuration options
 available makes a turn-key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.
 
 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?
 
 Therefore, there is some talk about spinning nova-network out into its
 own project where it could continue to live on and be better
 maintained than the current Nova team is able to do. However, this is
 a relatively new idea and we haven't had a chance to determine how
 feasible it is given where we are in the release cycle. I assume that
 if we did this, we would need to find a core team passionate about
 maintaining nova-network, and we would still need to provide some
 migration tooling for operators who are keen to move to Neutron.
 However, that migration tooling would be less critical than it is now.
 
 Unfortunately, this has all come to a head at a time when the Nova
 team is heads down getting the Kilo release out the door. We simply
 don't have the time at the moment to properly consider these issues.
 So, I'd like to ask for us to put a pause on this current work until
 we have Kilo done. These issues are complicated and important, so I
 feel we shouldn't rush them at a time we are distracted.
 
 Finally, I want to reinforce that the position we currently find
 ourselves in isn't because of a lack of effort. Oleg, Angus and Anita
 have all worked very hard on this problem during Kilo, and it is
 frustrating that we haven't managed to find a magic bullet to solve
 all of these problems. I want to personally thank each of them for
 their efforts this cycle on this relatively thankless task.
 
 I'd appreciate other's thoughts on these issues.
 
 Michael
 
 
 1: 
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/migration-from-nova-net.html#impact-limitations
 
 
Thank you, Michael, for this post.

It is clear that we need some additional discussion and agreement here,
and I welcome the discussion.

It is disheartening to try to create an implementation that won't
achieve the goal.

I too would like to thank everyone who has worked hard to try to create
a migration path with the understanding we had been operating with, my
thanks to each of you.

I have placed the weekly nova-net to neutron migration meeting on
hold[0], pending the outcome of this or other discussions and some
additional direction.

Thank you to all participating,
Anita.

[0] https://wiki.openstack.org/wiki/Meetings/Nova-nettoNeutronMigration



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/26/2015 08:58 PM, Russell Bryant wrote:
 On 03/26/2015 06:31 PM, Michael Still wrote:
 Hi,

 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.
 
 Thanks for writing up the status!
 
 First off, creating an all-encompassing turn-key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, "Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs" [1]. The variety of deployment and configuration options
 available makes a turn-key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.
 
 Yes, I'm quite convinced that it will end up being a fairly custom
 effort for virtually all deployments complex enough where just starting
 over or cold migration isn't an option.
 
 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. 
 
 I totally get point #1: nova-network has less features, but I don't
 need the rest, and nova-network is rock solid for me.
 
 I'm curious about the second point about Neutron being more difficult to
 deploy than nova-network.  That's interesting because it actually seems
 like Neutron is more flexible when it comes to integration with existing
 networks.  Do you know any more details?  If not, perhaps those with
 that concern could fill in with some detail here?
 
 So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?
 
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.
 
 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
I heartily agree.

Here is my problem. I am getting the feeling from the big tent
discussions (now this could be my fault, since I don't know whether it is in
the proposal or just the stuff people are making up about it) that we
are allowing more than one networking project in OpenStack. I have been
disappointed with that impression, but that has been the impression I
have gotten.

I'm glad to hear you have a different perspective on this, Russell, and
would just like to clarify this point.

Are we saying that OpenStack has one networking option?

Thanks,
Anita.
 
 Therefore, there is some talk about spinning nova-network out into its
 own project where it could continue to live on and be better
 maintained than the current Nova team is able to do. However, this is
 a relatively new idea and we haven't had a chance to determine how
 feasible it is given where we are in the release cycle. I assume that
 if we did this, we would need to find a core team passionate about
 maintaining nova-network, and we would still need to provide some
 migration tooling for operators who are keen to move to Neutron.
 However, that migration tooling would be less critical than it is now.
 
 From a purely technical perspective, it seems like quite a bit of work.
  It reminds me of "we'll just split the scheduler out", and we see how
 long that's taking in practice.  I really think all of that effort is
 better spent just improving Neutron.
 
 From a community perspective, I'm not thrilled about long term
 fragmentation for such a fundamental piece of our stack.  So, I'd really
 like to dig into the current state of gaps between Neutron and
 nova-network.  If there were no real gaps, there would be no sensible
 argument to keep the 2nd option.
 
 Unfortunately, this has all come to a head at a time when the Nova
 team is heads down getting the Kilo release out the door. We simply
 don't have the time at the moment to properly consider these issues.
 So, I'd like to ask for us to put a pause on this current work until
 we have Kilo done. These issues are complicated and important, so I
 feel we shouldn't rush them at a time we are 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Assaf Muller


- Original Message -
 On 03/30/2015 09:25 AM, Assaf Muller wrote:
  
  
  - Original Message -
  On 03/27/2015 11:48 AM, Assaf Muller wrote:
 
 
  - Original Message -
  On 03/27/2015 05:22 AM, Thierry Carrez wrote:
  snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: "that Neutron
  thing is a scary beast, last time I looked into it I couldn't make
  sense
  of it, and nova-network just works for me."
 
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and
  outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
 
  We found a dev/ops volunteer for writing that migration guide but he
  was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather
  than
  put words in their mouth.
 
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net
  development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally
  not
  desirable.
 
  I think if you boil everything down, you end up with 3 really important
  differences.
 
  1) neutron is a fleet of services (it's very micro service) and every
  service requires multiple and different config files. Just configuring
  the fleet is a beast if it is not devstack (and even if it is)
 
  2) neutron assumes a primary interesting thing to you is tenant secured
  self service networks. This is actually explicitly not interesting to a
  lot of deploys for policy, security, political reasons/restrictions.
 
  3) neutron open source backend defaults to OVS (largely because #2). OVS
  is its own complicated engine that you need to learn to debug. While
  Linux bridge has challenges, it's also something that anyone who's
  worked with Linux & virtualization for the last 10 years has some
  experience with.
 
  (also, the devstack setup code for neutron is a rat's nest, as it was
  mostly not paid attention to. This means it's been 0 help in explaining
  anything to people trying to do neutron. For better or worse devstack is
  our executable manual for a lot of these things)
 
  so that being said, I think we need to talk about minimum viable
  neutron as a model and figure out how far away that is from n-net. This
  week at the QA Sprint, Dean, Sean Collins, and I have spent some time
  hashing it out, hopefully with something to show the end of the week.
  This will be the new devstack code for neutron (the old lib/neutron is
  moved to lib/neutron-legacy).
 
  Default setup will be provider networks (which means no tenant
  isolation). For that you should only need neutron-api, -dhcp, and -l2.
  So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
  like to revert back to linux bridge for the base case (though first code
  will probably be OVS because that's the happy path today).
 
 
  Looking at the latest user survey, OVS looks to be 3 times as popular as
  Linux bridge for production deployments. Having LB as the default seems
  like an odd choice. You also wouldn't want to change the default before
  LB is tested at the gate.
 
  Sure, actually testing defaults is presumed here. I didn't think it
  needed to be called out separately.
  
  Quick update about OVS vs LB:
  Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
  https://review.openstack.org/#/c/168423/
  
  So far it's failing pretty badly.
 That is the nature of development.
 
 Let's also note that is patchset 1 of a patch marked work in progress.
 
 If we start making decisions about whether a direction is reasonable
 based on a patch that is expected to fail this early in the development
 process, we seriously injure our ability to foster development.
 
 Please understand and respect the development process prior to expecting
 others to make decisions prematurely.
 

I was providing a status report, nothing more.

 Thank you,
 Anita.
  
 
 -Sean
 
  --
  Sean Dague
  http://dague.net
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Russell Bryant
On 03/30/2015 10:34 AM, Anita Kuno wrote:
 On 03/26/2015 08:58 PM, Russell Bryant wrote:
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.

 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
 I heartily agree.
 
 Here is my problem. I am getting the feeling from the big tent
 discussions (now this could be my fault since I don't know if it is in
 the proposal or just the stuff people are making up about it) that we
 are allowing more than one networking project in OpenStack. I have been
 disappointed with that impression but that has been the impression I
 have gotten.
 
 I'm glad to hear you have a different perspective on this, Russell, and
 would just like to clarify this point.
 
 Are we saying that OpenStack has one networking option?

I wouldn't say that exactly.  We clearly have two today.  :-)

I don't think anyone intended to have two for as long as we have, and I
feel that has been detrimental to the OpenStack mission.  I'm very
thankful for the ongoing efforts to rectify that situation.

My general feeling about overlap in OpenStack is that it's more costly
the lower we go in the stack.  If we think about the base compute set
of projects (like Nova, Glance, Neutron, Keystone, Cinder), I feel we
should resist overlap there more strongly than we might at the higher
layers.

I think lacking consensus around a networking direction is harmful to
our mission.  I will not say a new networking API should never happen,
but the bar should be high.

In fact, this very debate is happening right now on whether or not the
group based policy project should be accepted as an OpenStack project:

https://review.openstack.org/#/c/161902/

-- 
Russell Bryant



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Steve Wormley
On Mon, Mar 30, 2015 at 4:49 AM, Jesse Pretorius jesse.pretor...@gmail.com
wrote:

 On 28 March 2015 at 00:41, Steve Wormley openst...@wormley.com wrote:

 2. Floating IPs managed at each compute node(multi-host) and via the
 standard nova API calls.



 2 meant we can't use pure provider VLAN networks so we had to wait for DVR
 VLAN support to work.


 I'm always confused when I see operators mention that provider VLANs can't
 be used in a Neutron configuration. At my former employer we had that
 setup with Grizzly; note also that any instance attached to a VLAN-tagged
 tenant network did not go via the L3 agent... the traffic was tagged
 and sent directly from the compute node onto the VLAN.

 All we had to do to make this work was to allow VLAN tagged networks and
 the cloud admin had to setup the provider network with the appropriate VLAN
 tag.
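For reference, the kind of setup described above could be sketched with the Neutron CLI of that era. This is a hedged example only: the physical network label physnet1, VLAN ID 100, and the addressing are placeholders, and ML2 must already be configured to allow VLAN provider networks.

```shell
# Hypothetical provider-VLAN setup; names and IDs are illustrative.
# Assumes ML2 with the vlan type driver and a matching bridge mapping
# (e.g. network_vlan_ranges = physnet1) on the compute nodes.
neutron net-create prod-vlan-100 \
    --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100

# Subnet on the existing VLAN, pointing at the existing hardware gateway
# and leaving room for non-OpenStack hosts outside the allocation pool.
neutron subnet-create prod-vlan-100 10.20.30.0/24 \
    --name prod-vlan-100-subnet \
    --gateway 10.20.30.1 \
    --allocation-pool start=10.20.30.100,end=10.20.30.200
```

Instances attached to such a network get tagged onto the VLAN directly at the compute node, as described above; no L3 agent is involved.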

As you say, provider networks and VLANs work fine. Provider networks, VLANs
and OpenStack-managed floating IP addresses for the same instances do not.

-Steve wormley


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Anita Kuno
On 03/30/2015 12:35 PM, Russell Bryant wrote:
 On 03/30/2015 10:34 AM, Anita Kuno wrote:
 On 03/26/2015 08:58 PM, Russell Bryant wrote:
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.

 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
 I heartily agree.

 Here is my problem. I am getting the feeling from the big tent
 discussions (now this could be my fault since I don't know if it is in
 the proposal or just the stuff people are making up about it) that we
 are allowing more than one networking project in OpenStack. I have been
 disappointed with that impression but that has been the impression I
 have gotten.

 I'm glad to hear you have a different perspective on this, Russell, and
 would just like to clarify this point.

 Are we saying that OpenStack has one networking option?
 
 I wouldn't say that exactly.  We clearly have two today.  :-)
 
 I don't think anyone intended to have two for as long as we have, and I
 feel that has been detrimental to the OpenStack mission.  I'm very
 thankful for the ongoing efforts to rectify that situation.
 
 My general feeling about overlap in OpenStack is that it's more costly
 the lower we go in the stack.  If we think about the base compute set
 of projects (like Nova, Glance, Neutron, Keystone, Cinder), I feel we
 should resist overlap there more strongly than we might at the higher
 layers.
 
 I think lacking consensus around a networking direction is harmful to
 our mission.  I will not say a new networking API should never happen,
 but the bar should be high.
 
 In fact, this very debate is happening right now on whether or not the
 group based policy project should be accepted as an OpenStack project:
 
 https://review.openstack.org/#/c/161902/
 
Thank you, Russell. I agree with you and I am grateful that you took the
time to spell it out for the mailing list.

Lack of clarity hurts our users, every decision we make should keep our
users best interests in mind going forward, as you outline in your reply.

Thanks Russell,
Anita.



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Steve Wormley
On Sun, Mar 29, 2015 at 6:45 AM, Kevin Benton blak...@gmail.com wrote:

 Does the decision about the floating IP have to be based on the use of the
 private IP in the original destination, or could you get by with rules on
 the L3 agent to avoid NAT just based on the destination being in a
 configured set of CIDRs?

 If you could get by with the latter it would be a much simpler problem to
 solve. However, I suspect you will want the former to be able to connect to
 floating IPs internally as well.

That's one issue: having systems like monitoring accessing both addresses.
The other, like many other large organizations, is that we have a fairly
large number of disjoint address spaces between all the groups accessing
our cloud. So trying to create and maintain that sort of list, short of a
routing protocol feed, is not easy.

-Steve Wormley


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-30 Thread Sean M. Collins
 Quick update about OVS vs LB:
 Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
 https://review.openstack.org/#/c/168423/
 
 So far it's failing pretty badly.


I haven't had a chance to debug the failures - it is my hope that
perhaps there are just more changes I need to make to DevStack to make
LinuxBridge work correctly. If anyone is successfully using LinuxBridge
with DevStack, please do review that patch and offer suggestions or
share their local.conf file. :)

-- 
Sean M. Collins



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-29 Thread Kevin Benton
Does the decision about the floating IP have to be based on the use of the
private IP in the original destination, or could you get by with rules on
the L3 agent to avoid NAT just based on the destination being in a
configured set of CIDRs?

If you could get by with the latter it would be a much simpler problem to
solve. However, I suspect you will want the former to be able to connect to
floating IPs internally as well.
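The latter option could be sketched as a small policy check. This is purely illustrative; the helper name and CIDR list are assumptions, not Neutron code.

```python
import ipaddress

# CIDRs whose traffic should keep the private source address.
# Illustrative values; in the scenario discussed these would be the
# configured set of internal address spaces.
NO_NAT_CIDRS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]


def should_apply_floating_nat(dest_ip: str) -> bool:
    """Return True if traffic to dest_ip should be source-NATted to the
    floating IP, False if it should leave with the private address."""
    dest = ipaddress.ip_address(dest_ip)
    return not any(dest in net for net in NO_NAT_CIDRS)


print(should_apply_floating_nat("8.8.8.8"))   # external destination: NAT
print(should_apply_floating_nat("10.1.2.3"))  # internal destination: no NAT
```

This only handles the static-CIDR case; deciding by the original destination address of an inbound connection needs connection tracking instead.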
On Mar 28, 2015 12:24 PM, Steve Wormley openst...@wormley.com wrote:

 On Sat, Mar 28, 2015 at 1:57 AM, Kevin Benton blak...@gmail.com wrote:

 You want floating IPs at each compute node, and DVR with VLAN support got
 you close. Are the floating IPs okay being on a different network/VLAN?


 I should clarify, the floating IPs are publicly routable addresses, as
 opposed to instances on RFC1918 space. This is the 'standard' neutron and
 nova-network floating IP model. Nothing really special there.

 Which address do you expect the source to be when an instance communicates
 outside of its network (no existing connection state)? You mentioned having
 the L3 agent ARP for a different gateway, do you still want the floating IP
 translation to happen before that? Is there any case where it should ever
 be via the private address?


 Instances with assigned floating IP addresses initiating connections are
 NATted and go out the floating IP. In reality, we special case all RFC 1918
 space to not trigger the floating IP.


 The header mangling is to make up for the fact that traffic coming to the
 floating IP gets translated by the L3 agent before it makes it to the
 instance so there is no way to distinguish whether the floating IP or
 private IP was targeted. Is that correct?


 Basically. Traffic coming in on a tenant VLAN to the instance is mangled
 by the first OVS rule it hits to indicate it came in via a private
 interface/subnet/VLAN. It then hits iptables on the instance Linux bridge,
 which turns the header bits into a conntrack mark. The outbound packets from
 the instance for that connection get the conntrack mark changed back to a
 header bit. This packet then hits iptables in the qrouter namespace,
 where it's turned into a normal fwmark/nfmark. That mark is used to disable
 NAT for the packet and flags the ip route rules to not send the packet to
 the FIP namespace but to instead let it flow normally.

 Of course, all this horribleness is because the veth drivers in Linux wipe
 the SKB mark (fwmark/nfmark), so I have no way to persistently track a packet
 across the OVS-veth-Linux bridge boundaries.

 -Steve Wormley





Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-28 Thread Steve Wormley
On Sat, Mar 28, 2015 at 1:57 AM, Kevin Benton blak...@gmail.com wrote:

 You want floating IPs at each compute node, and DVR with VLAN support got
 you close. Are the floating IPs okay being on a different network/VLAN?


I should clarify, the floating IPs are publicly routable addresses, as
opposed to instances on RFC1918 space. This is the 'standard' neutron and
nova-network floating IP model. Nothing really special there.

Which address do you expect the source to be when an instance communicates
 outside of its network (no existing connection state)? You mentioned having
 the L3 agent ARP for a different gateway, do you still want the floating IP
 translation to happen before that? Is there any case where it should ever
 be via the private address?


Instances with assigned floating IP addresses initiating connections are
NATted and go out the floating IP. In reality, we special case all RFC 1918
space to not trigger the floating IP.


 The header mangling is to make up for the fact that traffic coming to the
 floating IP gets translated by the L3 agent before it makes it to the
 instance so there is no way to distinguish whether the floating IP or
 private IP was targeted. Is that correct?


Basically. Traffic coming in on a tenant VLAN to the instance is mangled by
the first OVS rule it hits to indicate it came in via a private
interface/subnet/VLAN. It then hits iptables on the instance Linux bridge,
which turns the header bits into a conntrack mark. The outbound packets from
the instance for that connection get the conntrack mark changed back to a
header bit. This packet then hits iptables in the qrouter namespace,
where it's turned into a normal fwmark/nfmark. That mark is used to disable
NAT for the packet and flags the ip route rules to not send the packet to
the FIP namespace but to instead let it flow normally.

Of course, all this horribleness is because the veth drivers in Linux wipe
the SKB mark (fwmark/nfmark), so I have no way to persistently track a packet
across the OVS-veth-Linux bridge boundaries.
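The round trip described above could be approximated with rules along these lines. This is a heavily hedged sketch: the mark values, match criteria, and chains are invented for illustration and do not reproduce the actual deployment's rules.

```shell
# 1. Instance-side bridge: an OVS rule has already set a header bit
#    (e.g. a TOS bit); fold it into a conntrack mark.
iptables -t mangle -A PREROUTING -m tos --tos 0x04 \
    -j CONNMARK --set-mark 0x1

# 2. Outbound packets on the same connection: restore the conntrack
#    mark onto the packet as an fwmark.
iptables -t mangle -A PREROUTING -m conntrack --ctdir REPLY \
    -j CONNMARK --restore-mark

# 3. In the qrouter namespace: marked packets skip the floating-IP SNAT...
iptables -t nat -I POSTROUTING -m mark --mark 0x1 -j ACCEPT

# 4. ...and bypass the FIP namespace via normal routing.
ip rule add fwmark 0x1 lookup main priority 100
```

The complication described in the last paragraph remains: because veth devices drop the SKB mark, the fwmark cannot simply be carried across the OVS/veth/bridge hops, which is why header bits are borrowed instead.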

-Steve Wormley


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-28 Thread Fox, Kevin M
It sounds like you want to be able to allocate and manage floating IPs out of a
Neutron subnet and attach them to VMs in that same subnet? No router needed?
Sounds useful.

Would probably need different quotas, since they aren't public floating IPs.

Maybe floating IP quotas should be separated by subnet ID? This would help
anyway when you have multiple external networks.

Thanks,
Kevin


From: Kevin Benton
Sent: Saturday, March 28, 2015 1:57:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

This is a use case that we probably need a better equivalent of on the Neutron 
side. It would be great if you could clarify a few things to make sure I 
understand it correctly.

You want floating IPs at each compute node, and DVR with VLAN support got you 
close. Are the floating IPs okay being on a different network/VLAN?

Which address do you expect the source to be when an instance communicates 
outside of its network (no existing connection state)? You mentioned having the 
L3 agent ARP for a different gateway, do you still want the floating IP 
translation to happen before that? Is there any case where it should ever be 
via the private address?

The header mangling is to make up for the fact that traffic coming to the 
floating IP gets translated by the L3 agent before it makes it to the instance 
so there is no way to distinguish whether the floating IP or private IP was 
targeted. Is that correct?

Thanks for posting this.

Cheers,
Kevin Benton

On Fri, Mar 27, 2015 at 5:41 PM, Steve Wormley 
openst...@wormley.com wrote:
So, I figured I'd weigh in on this as an employee of a nova-network using 
company.

Nova-network allowed us to do a couple things simply.

1. Attach OpenStack networks to our existing VLANs using our existing
firewall/gateway and allow easy access to hardware such as database servers and 
storage on the same VLAN.
2. Floating IPs managed at each compute node(multi-host) and via the standard 
nova API calls.
3. Access to our instances via their private IP addresses from inside the 
company(see 1)

Our forklift replacement to neutron(as we know we can't 'migrate') is at the 
following state.
2 meant we can't use pure provider VLAN networks so we had to wait for DVR VLAN 
support to work.

Now that that works, I had to go in and convince Neutron to let me configure my 
own gateways as the next hop instead of the central SNAT gateway's assigned IP. 
This also required making it so the distributed L3 agents could do ARP for the 
'real' gateway on the subnet.

Item 3 works fine until a floating IP is assigned. For nova-network this was 
trivial connection tracked routing sending packets that reached an instance via 
its private IP back out the private VLAN and everything else via the assigned 
public IP.

Neutron, OVS and the various veth connections between them mean I can't use
packet marking between instances and the router namespace. Between that and a
whole bunch of other things, we had to borrow some IP header bits to track
where a packet came in, so that if a response to that connection hit the DVR
router it could be sent back out the private network.

And for the next week I get to try to make this all Python code so we can
actually finally test it without hand-crafted iptables and OVS rules.

For our model most of the Neutron features are wasted, but as we've been told 
that nova-network is going away we're going to figure out how to make Neutron 
work going forward.

-Steve Wormley
Not really speaking for my employer





--
Kevin Benton


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Thierry Carrez
Kyle Mestery wrote:
 On Thu, Mar 26, 2015 at 7:58 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 03/26/2015 06:31 PM, Michael Still wrote:
  Hi,
 
  I thought it would be a good idea to send out a status update for the
  migration from nova-network to Neutron, as there hasn't been as much
  progress as we'd hoped for in Kilo. There are a few issues which have
  been slowing progress down.
 
 Thanks for writing up the status!
 
  First off, creating an all encompassing turn key upgrade is probably
  not possible. This was also never a goal of this effort -- to quote
  the spec for this work, Consequently, the process described here is
  not a “one size fits all” automated push-button tool but a series of
  steps that should be obvious to automate and customise to meet local
  needs [1]. The variety of deployment and configuration options
  available makes a turn key migration very hard to write, and possibly
  impossible to test. We therefore have opted for writing migration
  tools, which allow operators to plug components together in the way
  that makes sense for their deployment and then migrate using those.
 
 Yes, I'm quite convinced that it will end up being a fairly custom
 effort for virtually all deployments complex enough where just starting
 over or cold migration isn't an option.
 
  However, talking to operators at the Ops Summit, is has become clear
  that some operators simply aren't interested in moving to Neutron --
  largely because they either aren't interested in tenant networks, or
  have corporate network environments that make deploying Neutron very
  hard.
 
 I totally get point #1: nova-network has less features, but I don't
 need the rest, and nova-network is rock solid for me.
 
 I'm curious about the second point about Neutron being more difficult to
 deploy than nova-network.  That's interesting because it actually seems
 like Neutron is more flexible when it comes to integration with existing
 networks.  Do you know any more details?  If not, perhaps those with
 that concern could fill in with some detail here?
 
  So, even if we provide migration tools, it is still likely that
  we will end up with loyal nova-network users who aren't interested in
  moving. From the Nova perspective, the end goal of all of this effort
  is to delete the nova-network code, and if we can't do that because
  some people simply don't want to move, then what is gained by putting
  a lot of effort into migration tooling?
 
 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.
 
 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.
 
  Therefore, there is some talk about spinning nova-network out into its
  own project where it could continue to live on and be better
  maintained than the current Nova team is able to do. However, this is
  a relatively new idea and we haven't had a chance to determine how
  feasible it is given where we are in the release cycle. I assume that
  if we did this, we would need to find a core team passionate about
  maintaining nova-network, and we would still need to provide some
  migration tooling for operators who are keen to move to Neutron.
  However, that migration tooling would be less critical than it is now.
 
 From a purely technical perspective, it seems like quite a bit of work.
  It reminds me of "we'll just split the scheduler out", and we see how
 long that's taking in practice.  I really think all of that effort is
 better spent just improving Neutron.
 
 From a community perspective, I'm not thrilled about long term
 fragmentation for such a fundamental piece of our stack.  So, I'd really
 like to dig into the current state of gaps between Neutron and
 nova-network.  If there were no real gaps, there would be no sensible
 argument to keep the 2nd option.
 
 I agree with Russell here. After talking to a few folks, my sense is
 there is still a misunderstanding between people running nova-network
 and those developing Neutron. I realize not everyone wants tenant
 networks, and I think we can look at the use case for that and see how
 to map it to what Neutron has, and fill in any missing gaps. There are
 some discussions started already to see how we can fill those gaps.

Part of it is corner (or simplified) use cases not being optimally
served by Neutron, and I think Neutron could more aggressively address
those. But the other part is 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean Dague
On 03/27/2015 05:22 AM, Thierry Carrez wrote:
snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
those. But the other part is ignorance and convenience: "that Neutron
thing is a scary beast, last time I looked into it I couldn't make sense
of it, and nova-network just works for me."
 
 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.
 
 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.
 
 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

I think if you boil everything down, you end up with 3 really important
differences.

1) neutron is a fleet of services (it's very micro service) and every
service requires multiple and different config files. Just configuring
the fleet is a beast if it is not devstack (and even if it is)

2) neutron assumes a primary interesting thing to you is tenant secured
self service networks. This is actually explicitly not interesting to a
lot of deploys for policy, security, political reasons/restrictions.

3) neutron open source backend defaults to OVS (largely because #2). OVS
is its own complicated engine that you need to learn to debug. While
Linux bridge has challenges, it's also something that anyone who's
worked with Linux & virtualization for the last 10 years has some
experience with.

(also, the devstack setup code for neutron is a rat's nest, as it was
mostly not paid attention to. This means it's been 0 help in explaining
anything to people trying to do neutron. For better or worse devstack is
our executable manual for a lot of these things)

so that being said, I think we need to talk about minimum viable
neutron as a model and figure out how far away that is from n-net. This
week at the QA Sprint, Dean, Sean Collins, and I have spent some time
hashing it out, hopefully with something to show the end of the week.
This will be the new devstack code for neutron (the old lib/neutron is
moved to lib/neutron-legacy).

Default setup will be provider networks (which means no tenant
isolation). For that you should only need neutron-api, -dhcp, and -l2.
So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
like to revert back to linux bridge for the base case (though first code
will probably be OVS because that's the happy path today).

Mixin #1: NEUTRON_BRIDGE_WITH=OVS

First optional layer being the flip from linuxbridge -> OVS. That becomes
one bite sized thing to flip over once you understand it.

Mixin #2: self service networks

This will be off in the default case, but can be enabled later.

... and turtles all the way up.
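As a rough illustration, the minimum viable service set above might look like this in a DevStack local.conf. Hedged: the service names follow DevStack's conventions of the time, but the provider-network knobs are placeholders and may differ by release.

```shell
[[local|localrc]]
# Minimal Neutron instead of nova-network: API, DHCP, and L2 agent only.
disable_service n-net
enable_service q-svc    # neutron-api
enable_service q-dhcp   # DHCP agent
enable_service q-agt    # L2 agent (Linux bridge or OVS)
# No q-l3 / q-meta: provider networks only, no tenant routers.

# Placeholder knobs for a provider network; the actual variable names
# may differ by DevStack release.
PHYSICAL_NETWORK=physnet1
PROVIDER_NETWORK_TYPE=vlan
```

Enabling mixin #1 or #2 would then be a matter of adding the OVS bridge setting or the L3/metadata services on top of this base.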


Provider networks w/ Linux bridge are really close to the simplicity on
the wire people expected with n-net. The only real remaining difference is
floating IPs. The problem here was best captured by Sean Collins on
Wednesday: floating IPs in nova are overloaded. They are both elastic IPs,
but they are also how you get public addresses in a default environment.
Dean shared that this dual purpose is entirely due to constraints of the
first NASA cloud, which only had a /26 of routable IPs. In Neutron this
is just different: you don't need floating IPs to have public addresses.
But the mental model has stuck.


Anyway, while I'm not sure this is going to solve everyone's issues, I
think it's a useful exercise for devstack's neutron support to
revert to a minimum viable neutron for learning purposes, and let you
layer on complexity manually over time. And I'd be really curious whether an
n-net -> provider network side-step (still on linux bridge) would
actually be a more reasonable transition for most environments.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Tim Bell
From the stats 
(http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014),


- 43% of production clouds use OVS (the default for Neutron)

- 30% of production clouds are Nova network based

- 15% of production clouds use linux bridge

There is therefore a significant share of the OpenStack production user 
community who are interested in a simple provider network linux bridge based 
solution.

I think it is important to make an attractive cloud solution  where deployers 
can look at the balance of function and their skills and choose the appropriate 
combinations.

Whether a simple network model should be the default is a different question
from whether there should be a simple option. Personally, one of the most
regular complaints I get is the steep learning curve for a new deployment. If
we could make it so that people can do it as a series of steps (by making a
path to add OVS) rather than a large leap, I think that would be attractive.

BTW, CERN is on nova network with 3,200 hypervisors across 2 sites and we're 
interested in moving to Neutron to stay mainstream. The CERN network is set up as 
a series of IP services which correspond to broadcast domains. A typical IP 
service is around 200-500 servers with a set of top of the rack switches and 
one or two router uplinks. An IP address is limited to an IP service. We then 
layer a secondary set of IP networks on the hypervisors on the access switches 
which are allocated to VMs. We change router and switch vendor on average every 
5 years as part of public procurement and therefore generic solutions are 
required. Full details of the CERN network can be found at 
http://indico.cern.ch/event/327056/contribution/0/material/slides/0.pdf.

Tim

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: 27 March 2015 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

No, no. Most OpenStack deployments are neutron based with ovs because it's the 
default these days.

There have been all sorts of warnings to folks for years saying that if you 
start with nova-network, there will be pain for you later. Hopefully, that has 
scared away most new folks from doing it. Most of the existing folks are there 
because they started before Neutron was up to speed. That's a different problem.

So I would expect the number of folks needing to go from nova-network to 
neutron to be a small number of clouds, not a big number. Changing the defaults 
now to favor that small minority of clouds, seems like an odd choice.

Really, I don't think finding the right solution to migrate those still using 
nova-network to neutron has anything to do with what the default out of the box 
experience for new clouds should be...

Having linuxbridge be the default for folks moving from nova-network to neutron 
might make much more sense than saying everyone should by default get 
linuxbridge.

Thanks,
Kevin

From: Dean Troyer [dtro...@gmail.com]
Sent: Friday, March 27, 2015 9:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work
On Fri, Mar 27, 2015 at 10:48 AM, Assaf Muller amul...@redhat.com wrote:
Looking at the latest user survey, OVS looks to be 3 times as popular as
Linux bridge for production deployments. Having LB as the default seems
like an odd choice. You also wouldn't want to change the default before
LB is tested at the gate.

Simple things need to be simple to accomplish, and defaults MUST be simple to 
use.

LB's support requirements are very simple compared to OVS's. This is an 
achievable first step away from nova-net, and once conquered, the second step 
becomes less overwhelming. Look at the success we've seen in the last 
$TOO_MANY years of trying to swallow the entire elephant at once.

dt

--

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean M. Collins
Let's also keep in mind that the ML2 Plugin has *both* openvswitch and
linuxbridge mechanism drivers enabled[1]. If I understand things
correctly, this means this discussion shouldn't turn into a debate about
which mechanism everyone prefers, since *both* are enabled.

There is one thing that we do in DevStack currently, where we select the
openvswitch agent[2] by default - I don't know what impact that has when
you want to use linuxbridge as the mechanism. I have to do some more
research; ideally we'd be able to run both the OVS and linux bridge
mechanisms by default, so that people who want OVS get OVS, linuxbridge
people get linuxbridge, and we can all live happily ever after. :)


[1]: 
https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ml2#L25
[2]: 
https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ml2#L21
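For reference, the driver selection Sean is describing lives in ml2_conf.ini; a fragment along these lines (values here are illustrative, mirroring the devstack defaults linked above) loads both mechanism drivers side by side, with the agent running on each host deciding which one actually binds a port:

```ini
[ml2]
# Both drivers loaded; ML2 tries each in order when binding a port,
# so OVS hosts and linuxbridge hosts can coexist in one deployment.
mechanism_drivers = openvswitch,linuxbridge
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
```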

-- 
Sean M. Collins



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Jay Pipes
On Fri, Mar 27, 2015 at 09:31:39AM +1100, Michael Still wrote:
 Hi,
 
 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.
 
 First off, creating an all encompassing turn key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, "Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs" [1]. The variety of deployment and configuration options
 available makes a turn key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.

As Russell mentioned in an earlier response on this thread, the fact is
that most migrations from nova-net to Neutron would require custom work
to make it happen. Adding documentation on how to do migration from
nova-net to Neutron is a grand idea, but I suspect it will ultimately
fall short of the needs of the (very few) operators that would attempt
such a thing (as opposed to a cold migration from older nova-net zones
to newer greenfield Neutron zones).

 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code

Actually, IMO, the end goal should be to provide our end users with the
most stable, simple to deploy and operate, and scalable network as a
service product. The goal shouldn't be the separation or deletion of the
nova-network code -- just as is true that the goal of the Gantt project
should not be the split of the nova-scheduler itself, but rather to
provide the most stable, intuitive and easy-to-use placement engine for
OpenStack end users.

, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?

As Sean mentioned (I think), if Neutron was attractive to nova-network
deployers as an alternative handler of cloud network servicing, then
there would be more value in spending time on the nova-network to
Neutron migration.

But, there's the rub. Neutron *isn't* attractive to this set of people
because:

a) It doesn't provide for automatic (sub)net allocation for a user or
tenant in the same way that nova-network just Gets This Done for a user
that wants to launch an instance. As I think Kevin Fox mentioned, a
cloud admin should be able to easily set up a bunch of networks usable
by tenants, and Nova should be able to ask Neutron to just do the
needful and wire up a subnet for use by the instance without the user
needing to create a subnet, a router object, or wiring up the
connectivity themselves. I complained about this very problem (of not
having automated subnet and IP assignments) nearly *two years ago*:

http://lists.openstack.org/pipermail/openstack-dev/2013-July/011981.html

and was told by Neutron core team members that they weren't really
interested in changing Neutron to be more like Nova's network
auto-service behaviours.

b) It is way more complicated to deploy Neutron than nova-network (even
nova-network in multihost mode). While the myriad vendor plugins for L2
and L3 are nice flexibility, they add much complexity to the deployment
of Neutron. Just ask Thomas Goirand, who is currently working on
packaging the Neutron vendor mechanism drivers for Debian, about that
complexity.

c) There's been no demonstration that data plane performance of
nova-network with linux bridging can be beaten by the open source
Neutron SDN solutions. Not having any reliable and transparent
benchmarking that compares the huge matrix of network topologies,
overlay providers, and data plane options is a major reason for the lack
of uptake of Neutron by all but the bravest greenfield deployments.

 Therefore, there is some talk about spinning nova-network out into its
 own project where it could continue to live on and be better
 maintained than the current Nova team is able to do. However, this is
 a relatively new idea and we haven't had a chance to determine how
 feasible it is given where we are in the release cycle. I assume that
 if we did this, we would need to find a core team passionate about
 maintaining nova-network, and we would still need to provide some
 migration 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean M. Collins
On Fri, Mar 27, 2015 at 01:17:48PM EDT, Jay Pipes wrote:
 But, there's the rub. Neutron *isn't* attractive to this set of people
 because:
 
 a) It doesn't provide for automatic (sub)net allocation for a user or
 tenant in the same way that nova-network just Gets This Done for a user
 that wants to launch an instance. As I think Kevin Fox mentioned, a
 cloud admin should be able to easily set up a bunch of networks usable
 by tenants, and Nova should be able to ask Neutron to just do the
 needful and wire up a subnet for use by the instance without the user
 needing to create a subnet, a router object, or wiring up the
 connectivity themselves. I complained about this very problem (of not
 having automated subnet and IP assignments) nearly *two years ago*:
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011981.html
 
 and was told by Neutron core team members that they weren't really
 interested in changing Neutron to be more like Nova's network
 auto-service behaviours.

I can't speak for others, but I think the subnet allocation API is a
first step towards fixing that[1]. 

On the IPv6 side - I am adamant[2] that it should not require complex
operations since protocols like Prefix Delegation should make
provisioning networking dead simple to the user - similar to how Comcast
deploys IPv6 for residential customers - just plug in. This will be a
big part of my speaking session with Carl[3].

[1]: 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/subnet-allocation.html
[2]: http://lists.openstack.org/pipermail/openstack-dev/2015-March/059329.html
[3]: 
https://openstacksummitmay2015vancouver.sched.org/event/085f7141a451efc531430dc15d886bb2#.VQyLY0aMVE5

-- 
Sean M. Collins



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Fox, Kevin M
I think the main disconnect comes from this

Is NaaS a critical feature of the cloud, or not? nova-network asserts no. The 
neutron team asserts yes, and neutron is being developed with that in mind 
currently. This is a critical assertion that should be discussed.

With my app developer hat on, I tend to agree that NaaS is a requirement for 
a very useful cloud. Living without it is much like living in the times before 
VMs as a service were a thing. It really hurts to build non-trivial apps 
without it.

As a cloud provider, you always need to consider what's the best thing for your 
customers, not yourself. I think the extra pain to setup NaaS has been worth it 
on every cloud I've deployed/used.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Friday, March 27, 2015 10:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

On Fri, Mar 27, 2015 at 09:31:39AM +1100, Michael Still wrote:
 Hi,

 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.

 First off, creating an all encompassing turn key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, "Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs" [1]. The variety of deployment and configuration options
 available makes a turn key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.

As Russell mentioned in an earlier response on this thread, the fact is
that most migrations from nova-net to Neutron would require custom work
to make it happen. Adding documentation on how to do migration from
nova-net to Neutron is a grand idea, but I suspect it will ultimately
fall short of the needs of the (very few) operators that would attempt
such a thing (as opposed to a cold migration from older nova-net zones
to newer greenfield Neutron zones).

 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code

Actually, IMO, the end goal should be to provide our end users with the
most stable, simple to deploy and operate, and scalable network as a
service product. The goal shouldn't be the separation or deletion of the
nova-network code -- just as is true that the goal of the Gantt project
should not be the split of the nova-scheduler itself, but rather to
provide the most stable, intuitive and easy-to-use placement engine for
OpenStack end users.

, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?

As Sean mentioned (I think), if Neutron was attractive to nova-network
deployers as an alternative handler of cloud network servicing, then
there would be more value in spending time on the nova-network to
Neutron migration.

But, there's the rub. Neutron *isn't* attractive to this set of people
because:

a) It doesn't provide for automatic (sub)net allocation for a user or
tenant in the same way that nova-network just Gets This Done for a user
that wants to launch an instance. As I think Kevin Fox mentioned, a
cloud admin should be able to easily set up a bunch of networks usable
by tenants, and Nova should be able to ask Neutron to just do the
needful and wire up a subnet for use by the instance without the user
needing to create a subnet, a router object, or wiring up the
connectivity themselves. I complained about this very problem (of not
having automated subnet and IP assignments) nearly *two years ago*:

http://lists.openstack.org/pipermail/openstack-dev/2013-July/011981.html

and was told by Neutron core team members that they weren't really
interested in changing Neutron to be more like Nova's network
auto-service behaviours.

b) It is way more complicated to deploy Neutron than nova-network (even
nova-network in multihost mode). While the myriad vendor plugins for L2
and L3 are nice flexibility, they add much complexity to the deployment
of Neutron. Just ask Thomas

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Dean Troyer
On Fri, Mar 27, 2015 at 11:35 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  So I would expect the number of folks needing to go from nova-network to
 neutron to be a small number of clouds, not a big number. Changing the
 defaults now to favor that small minority of clouds, seems like an odd
 choice.


This is not the default for deployments, except for the ignorant people
using DevStack for deployment, and they have already self-selected for
failure by doing that in the first place.


 Really, I don't think finding the right solution to migrate those still
 using nova-network to neutron has anything to do with what the default out
 of the box experience for new clouds should be...


Honestly, I don't give a rat's a$$ about the migrations. But I do care
about the knowledge required to mentally shift from nova-net to neutron.
And we have failed there: DevStack has totally failed to make
Neutron usable without having to learn far too much just to get a dev cloud.

Having linuxbridge be the default for folks moving from nova-network to
 neutron might make much more sense than saying everyone should by default
 get linuxbridge.


The complaint was about DevStack using LB as its default.  I DO NOT want
the overhead of OVS on my development VMs when I am not doing any
network-related work. I am not alone here.

Simple things MUST be simple to do.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean Dague
On 03/27/2015 11:48 AM, Assaf Muller wrote:
 
 
 - Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.

 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.

 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.

 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

 I think if you boil everything down, you end up with 3 really important
 differences.

 1) neutron is a fleet of services (it's very micro service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)

 2) neutron assumes a primary interesting thing to you is tenant secured
 self service networks. This is actually explicitly not interesting to a
 lot of deploys for policy, security, political reasons/restrictions.

 3) neutron's open source backend defaults to OVS (largely because of #2). OVS
 is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & virtualization for the last 10 years has some
 experience with.

 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been 0 help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)

 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).

 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).

 
 Looking at the latest user survey, OVS looks to be 3 times as popular as
 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.

Sure, actually testing defaults is presumed here. I didn't think it
needed to be called out separately.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Assaf Muller


- Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: that Neutron
  thing is a scary beast, last time I looked into it I couldn't make sense
  of it, and nova-network just works for me.
  
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
  
  We found a dev/ops volunteer for writing that migration guide but he was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather than
  put words in their mouth.
  
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally not
  desirable.
 
 I think if you boil everything down, you end up with 3 really important
 differences.
 
 1) neutron is a fleet of services (it's very micro service) and every
 service requires multiple and different config files. Just configuring
  the fleet is a beast if it is not devstack (and even if it is)
 
 2) neutron assumes a primary interesting thing to you is tenant secured
 self service networks. This is actually explicitly not interesting to a
 lot of deploys for policy, security, political reasons/restrictions.
 
  3) neutron's open source backend defaults to OVS (largely because of #2). OVS
  is its own complicated engine that you need to learn to debug. While
  Linux bridge has challenges, it's also something that anyone who's
  worked with Linux & virtualization for the last 10 years has some
  experience with.
 
  (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been 0 help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)
 
 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).
 
 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).
 

Looking at the latest user survey, OVS looks to be 3 times as popular as
Linux bridge for production deployments. Having LB as the default seems
like an odd choice. You also wouldn't want to change the default before
LB is tested at the gate.

 Mixin #1: NEUTRON_BRIDGE_WITH=OVS
 
 First optional layer: flip from linuxbridge -> ovs. That becomes
 one bite-sized thing to flip over once you understand it.
 
 Mixin #2: self service networks
 
 This will be off in the default case, but can be enabled later.
 
 ... and turtles all the way up.
 
 
 Provider networks w/ Linux bridge are really close to the simplicity on
 the wire people expected with n-net. The last real difference is
 floating IPs. And the problem here was best captured by Sean Collins on
 Wed, Floating ips in nova are overloaded. They are both elastic ips, but
 they are also how you get public addresses in a default environment.
 Dean shared that that dual purpose is entirely due to constraints of the
 first NASA cloud which only had a /26 of routable IPs. In neutron this
 is just different, you don't need floating ips to have public addresses.
 But the mental model has stuck.
 
 
 Anyway, while I'm not sure this is going to solve everyone's issues, I
 think it's a useful exercise anyway for devstack's neutron support to
 revert to a minimum viable neutron for learning purposes, and let you
 layer on complexity manually over time. And I'd be really curious if a
 n-net -> provider network side-step (still on linux bridge) would
 actually be a more reasonable transition for most environments.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Mohammad Banikazemi


Sean Dague s...@dague.net wrote on 03/27/2015 07:11:18 AM:

 From: Sean Dague s...@dague.net
 To: openstack-dev@lists.openstack.org
 Date: 03/27/2015 07:12 AM
 Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-
 network to Neutron migration work

 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
  Part of it is corner (or simplified) use cases not being optimally
  served by Neutron, and I think Neutron could more aggressively address
  those. But the other part is ignorance and convenience: that Neutron
  thing is a scary beast, last time I looked into it I couldn't make
sense
  of it, and nova-network just works for me.
 
  That is why during the Ops Summit we discussed coming up with a
  migration guide that would explain the various ways you can use Neutron
  to cover nova-network use cases, demystify a few dark areas, and
outline
  the step-by-step manual process you can go through to migrate from one
  to the other.
 
  We found a dev/ops volunteer for writing that migration guide but he
was
  unfortunately not allowed to spend time on this. I heard we have new
  volunteers, but I'll let them announce what their plans are, rather
than
  put words in their mouth.
 
  This migration guide can happen even if we follow the nova-net spinout
  plan (for the few who want to migrate to Neutron), so this is a
  complementary solution rather than an alternative. Personally I doubt
  there would suddenly be enough people interested in nova-net
development
  to successfully spin it out and maintain it. I also agree with Russell
  that long-term fragmentation at this layer of the stack is generally
not
  desirable.

 I think if you boil everything down, you end up with 3 really important
 differences.

 1) neutron is a fleet of services (it's very micro service) and every
 service requires multiple and different config files. Just configuring
  the fleet is a beast if it is not devstack (and even if it is)

 2) neutron assumes a primary interesting thing to you is tenant secured
 self service networks. This is actually explicitly not interesting to a
 lot of deploys for policy, security, political reasons/restrictions.

  3) neutron's open source backend defaults to OVS (largely because of #2). OVS
  is its own complicated engine that you need to learn to debug. While
  Linux bridge has challenges, it's also something that anyone who's
  worked with Linux & virtualization for the last 10 years has some
  experience with.

 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been 0 help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)

 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).

 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).
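
 To make that concrete, a minimal devstack local.conf along those lines
 might look roughly like the following (a sketch using today's devstack
 service names and variables, not necessarily the final interface):

```shell
# Hypothetical "minimum viable neutron" devstack config: provider
# networks only, three neutron services, Linux bridge as the L2 agent.
# q-svc = neutron API, q-dhcp = DHCP agent, q-agt = L2 agent.
[[local|localrc]]
disable_service n-net
enable_service q-svc q-dhcp q-agt
# Base case is Linux bridge; flip to openvswitch once you understand it
Q_AGENT=linuxbridge
```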


Are you suggesting that for the common use cases that will use the default
setup, the external network connectivity doesn't matter much?

 Mixin #1: NEUTRON_BRIDGE_WITH=OVS

 First optional layer being flip from linuxbridge -> ovs. That becomes
 one bite sized thing to flip over once you understand it.

 Mixin #2: self service networks

 This will be off in the default case, but can be enabled later.

 ... and turtles all the way up.


 Provider networks w/ Linux bridge are really close to the simplicity on
 the wire people expected with n-net. The last real difference is
 floating ips. And the problem here was best captured by Sean Collins on
 Wed: floating ips in nova are overloaded. They are both elastic ips, but
 they are also how you get public addresses in a default environment.
 Dean shared that that dual purpose is entirely due to constraints of the
 first NASA cloud which only had a /26 of routable IPs. In neutron this
 is just different, you don't need floating ips to have public addresses.
 But the mental model has stuck.
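
 For illustration, with the Kilo-era client a shared provider network
 that hands instances public addresses directly, with no floating IPs
 involved, would look roughly like this (the network name, physnet label,
 and address range are made-up examples):

```shell
# Instances attached to this network get routable addresses at boot;
# no floating IPs are needed to reach them.
neutron net-create public --shared \
  --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create public 203.0.113.0/24 --name public-subnet \
  --allocation-pool start=203.0.113.10,end=203.0.113.200 \
  --gateway 203.0.113.1
```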


 Anyway, while I'm not sure this is going to solve everyone's issues, I
 think it's a useful exercise anyway for devstack's neutron support to
 revert to a minimum viable neutron for learning purposes, and let you
 layer on complexity manually over time. And I'd be really curious if a
 n-net -> provider network side step (still on linux bridge) would
 actually be a more reasonable transition for most environments.

-Sean

 --
 Sean Dague
 http://dague.net


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Fox, Kevin M
The floating ip only on external networks thing has always been a little odd to
me...

Floating IPs are very important to ensure a user can switch out one instance
with another and keep 'state' consistent (the other piece being cinder
volumes). But why can't you do this on a provider network? It really is the
same thing. You can force the fixed IP to whatever you want, but it's a
completely different mechanism.


On the subject of, we don't need the rest of user defined networking, just 
provider networks, I'd add this:

One of the things I see long term as beneficial that cloud will provide is a 
catalog of open source cloud applications. As a user, you go to the catalog, 
search for... trac for example, and hit launch. easy, done...

As a developer of such templates, it's a real pain to have to deal with neutron
networking vs nova networking, let alone the many different ways of configuring
neutron. On top of that, one of the great features of NaaS is that you can push
isolation to the network layer and not have to deal so much with
authentication. Take ElasticSearch for example. It has no concept of
authentication since it is a backend service. You put it on its own network that
only the webservers can get to. But that means you can't write a template that
will work on anything but proper NaaS securely.

So, short term, you're not wanting to deal with the complication of a more
featureful neutron, but you're really just pushing the complication to the cloud
users/app developers, slowing down development of cloud apps, and therefore
your users' experience is diminished since their selection of apps is restricted
with all sorts of caveats. This application works only if your service
provider set up NaaS. Really, the way I see it, it's the cloud admin's job to
deal with complication so that the end users don't have to. It's one of the
things that makes being a cloud user so great. A few skilled cloud admins can
make it possible for many many less experienced folks to do amazing things on
top. The cloud and cloud admin hide all the complexity from the user.

Let's reduce the fragmentation as much as we can here. It will actually make the
app ecosystem and user experience much better in the long run.

Thanks,
Kevin

From: Sean Dague [s...@dague.net]
Sent: Friday, March 27, 2015 4:11 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

On 03/27/2015 05:22 AM, Thierry Carrez wrote:
snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.

 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.

 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.

 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.

I think if you boil everything down, you end up with 3 really important
differences.

1) neutron is a fleet of services (it's very micro-service) and every
service requires multiple and different config files. Just configuring
the fleet is a beast if it is not devstack (and even if it is)

2) neutron assumes a primary interesting thing to you is tenant secured
self service networks. This is actually explicitly not interesting to a
lot of deploys for policy, security, political reasons/restrictions.

3) neutron open source backend defaults to OVS (largely because #2). OVS
is its own complicated engine that you need to learn to debug. While
Linux bridge has challenges, it's also something that anyone who's
worked with Linux & Virtualization for the last 10 years has some
experience with.

(also, the devstack setup code for neutron is a rat's nest, as it was
mostly not paid attention to. This means it's been 0 help in explaining
anything to people trying to do neutron. For better or worse devstack is
our executable manual for a lot of these things)

so that being said, I think we need to talk about

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Mark Voelker
Inline…


On Mar 27, 2015, at 11:48 AM, Assaf Muller amul...@redhat.com wrote:

 
 
 - Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.
 
 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.
 
 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.
 
 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.
 
 I think if you boil everything down, you end up with 3 really important
 differences.
 
 1) neutron is a fleet of services (it's very micro-service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)
 
 2) neutron assumes a primary interesting thing to you is tenant secured
 self service networks. This is actually explicitly not interesting to a
 lot of deploys for policy, security, political reasons/restrictions.
 
 3) neutron open source backend defaults to OVS (largely because #2). OVS
 is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & Virtualization for the last 10 years has some
 experience with.
 
 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been 0 help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)
 
 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).
 
 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).
 
 
 Looking at the latest user survey, OVS looks to be 3 times as popular as

3x as popular *with existing Neutron users* though.  Not people that are still 
running nova-net.  I think we have to bear in mind here that when we’re looking 
at user survey results we’re looking at a general audience of OpenStack users, 
and what we’re trying to solve on this thread is a specific subset of that 
audience.  The very fact that those people are still running nova-net may be a 
good indicator that they don’t find the Neutron choices that lots of other 
people have made to be a good fit for their particular use cases (else they’d 
have switched by now).  We got some reinforcement of this idea during 
discussion at the Operator’s Midcycle Meetup in Philadelphia: the feedback from 
nova-net users that I heard was that OVS+Neutron was too complicated and too 
hard to debug compared to what they have today, hence they didn’t find it a 
compelling option.  

Linux Bridge is, in the eyes of many folks in that room, a simpler model in
terms of operating and debugging, so I think it's likely a very reasonable choice
for this group of users.  However, in the interest of ensuring that those
operators have a chance to chime in here, I’ve added openstack-operators to the thread.

At Your Service,

Mark T. Voelker


 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.
 
 Mixin #1: NEUTRON_BRIDGE_WITH=OVS
 
 First optional layer being flip from linuxbridge -> ovs. That becomes
 one bite sized thing to flip over once you understand it.
 
 Mixin #2: self service networks
 
 This will 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Fox, Kevin M
No, no. Most OpenStack deployments are neutron based with OVS because it's the
default these days.

There are all sorts of warnings to folks for years saying if you start with
nova-network, there will be pain for you later. Hopefully, that has scared away
most new folks from doing it. Most of the existing folks are there because they
started before Neutron was up to speed. That's a different problem.

So I would expect the number of folks needing to go from nova-network to 
neutron to be a small number of clouds, not a big number. Changing the defaults 
now to favor that small minority of clouds, seems like an odd choice.

Really, I don't think finding the right solution to migrate those still using 
nova-network to neutron has anything to do with what the default out of the box 
experience for new clouds should be...

Having linuxbridge be the default for folks moving from nova-network to neutron
might make much more sense than saying everyone should by default get
linuxbridge.

Thanks,
Kevin

From: Dean Troyer [dtro...@gmail.com]
Sent: Friday, March 27, 2015 9:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to 
Neutron migration work

On Fri, Mar 27, 2015 at 10:48 AM, Assaf Muller 
amul...@redhat.com wrote:
Looking at the latest user survey, OVS looks to be 3 times as popular as
Linux bridge for production deployments. Having LB as the default seems
like an odd choice. You also wouldn't want to change the default before
LB is tested at the gate.

Simple things need to be simple to accomplish, and defaults MUST be simple to 
use.

LB's support requirements are very simple compared to OVS.  This is an 
achievable first step away from nova-net and once conquered the second step 
becomes less overwhelming.  Look at the success of swallowing the entire 
elephant at once that we've seen in the last $TOO_MANY years.

dt

--

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Sean M. Collins
On Fri, Mar 27, 2015 at 11:11:42AM EDT, Mohammad Banikazemi wrote:
 Are you suggesting that for the common use cases that will use the default
 setup, the external network connectivity doesn't matter much?

No, if anything the reverse. The default will have external connectivity
by default, by using provider networks or flat networking.

-- 
Sean M. Collins



Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Dean Troyer
On Fri, Mar 27, 2015 at 10:48 AM, Assaf Muller amul...@redhat.com wrote:

 Looking at the latest user survey, OVS looks to be 3 times as popular as
 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.


Simple things need to be simple to accomplish, and defaults MUST be simple
to use.

LB's support requirements are very simple compared to OVS.  This is an
achievable first step away from nova-net and once conquered the second step
becomes less overwhelming.  Look at the success of swallowing the entire
elephant at once that we've seen in the last $TOO_MANY years.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Steve Wormley
So, I figured I'd weigh in on this as an employee of a nova-network-using
company.

Nova-network allowed us to do a couple things simply.

1. Attach openstack networks to our existing VLANs using our existing
firewall/gateway and allow easy access to hardware such as database servers
and storage on the same VLAN.
2. Floating IPs managed at each compute node(multi-host) and via the
standard nova API calls.
3. Access to our instances via their private IP addresses from inside the
company(see 1)

Our forklift replacement to neutron (as we know we can't 'migrate') is at
the following state:
Item 2 meant we can't use pure provider VLAN networks, so we had to wait for DVR
VLAN support to work.

Now that that works, I had to go in and convince Neutron to let me
configure my own gateways as the next hop instead of the central SNAT
gateway's assigned IP. This also required making it so the distributed L3
agents could do ARP for the 'real' gateway on the subnet.

Item 3 works fine until a floating IP is assigned. For nova-network this
was trivial: connection-tracked routing sent packets that reached an
instance via its private IP back out the private VLAN, and everything else
via the assigned public IP.

Neutron, OVS and the various veth connections between them mean I can't
use packet marking between instances and the router namespace. Between that
and a whole bunch of other things, we had to borrow some IP header bits to
track where a packet came in, so if a response to that connection hit the
DVR router it could be sent back out the private network.

And for the next week I get to try and make this all python code so we can
actually finally test it without hand crafted iptables and OVS rules.
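
For the curious, the hand-crafted rules I'm talking about are roughly of
this shape (interface names, marks, and table numbers here are purely
illustrative, not our actual configuration):

```shell
# Mark connections that arrived over the private VLAN interface, then
# policy-route replies for those connections back out that interface
# instead of through the floating-IP SNAT path.
iptables -t mangle -A PREROUTING -i eth1 -j CONNMARK --set-mark 0x1
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
ip rule add fwmark 0x1 table 100
ip route add default via 10.0.0.1 dev eth1 table 100
```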

For our model most of the Neutron features are wasted, but as we've been
told that nova-network is going away we're going to figure out how to make
Neutron work going forward.

-Steve Wormley
Not really speaking for my employer


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-26 Thread Russell Bryant
On 03/26/2015 06:31 PM, Michael Still wrote:
 Hi,
 
 I thought it would be a good idea to send out a status update for the
 migration from nova-network to Neutron, as there hasn't been as much
 progress as we'd hoped for in Kilo. There are a few issues which have
 been slowing progress down.

Thanks for writing up the status!

 First off, creating an all encompassing turn key upgrade is probably
 not possible. This was also never a goal of this effort -- to quote
 the spec for this work, Consequently, the process described here is
 not a “one size fits all” automated push-button tool but a series of
 steps that should be obvious to automate and customise to meet local
 needs [1]. The variety of deployment and configuration options
 available makes a turn key migration very hard to write, and possibly
 impossible to test. We therefore have opted for writing migration
 tools, which allow operators to plug components together in the way
 that makes sense for their deployment and then migrate using those.

Yes, I'm quite convinced that it will end up being a fairly custom
effort for virtually all deployments complex enough where just starting
over or cold migration isn't an option.

 However, talking to operators at the Ops Summit, it has become clear
 that some operators simply aren't interested in moving to Neutron --
 largely because they either aren't interested in tenant networks, or
 have corporate network environments that make deploying Neutron very
 hard. 

I totally get point #1: nova-network has less features, but I don't
need the rest, and nova-network is rock solid for me.

I'm curious about the second point about Neutron being more difficult to
deploy than nova-network.  That's interesting because it actually seems
like Neutron is more flexible when it comes to integration with existing
networks.  Do you know any more details?  If not, perhaps those with
that concern could fill in with some detail here?

 So, even if we provide migration tools, it is still likely that
 we will end up with loyal nova-network users who aren't interested in
 moving. From the Nova perspective, the end goal of all of this effort
 is to delete the nova-network code, and if we can't do that because
 some people simply don't want to move, then what is gained by putting
 a lot of effort into migration tooling?

To me it comes down to the reasons people don't want to move.  I'd like
to dig into exactly why people don't want to use Neutron.  If there are
legitimate reasons why nova-network will work better, then Neutron has
not met parity and we're certainly not ready to deprecate nova-network.

I still think getting down to a single networking project should be the
end goal.  The confusion around networking choices has been detrimental
to OpenStack.

 Therefore, there is some talk about spinning nova-network out into its
 own project where it could continue to live on and be better
 maintained than the current Nova team is able to do. However, this is
 a relatively new idea and we haven't had a chance to determine how
 feasible it is given where we are in the release cycle. I assume that
 if we did this, we would need to find a core team passionate about
 maintaining nova-network, and we would still need to provide some
 migration tooling for operators who are keen to move to Neutron.
 However, that migration tooling would be less critical than it is now.

From a purely technical perspective, it seems like quite a bit of work.
 It reminds me of "we'll just split the scheduler out", and we see how
long that's taking in practice.  I really think all of that effort is
better spent just improving Neutron.

From a community perspective, I'm not thrilled about long term
fragmentation for such a fundamental piece of our stack.  So, I'd really
like to dig into the current state of gaps between Neutron and
nova-network.  If there were no real gaps, there would be no sensible
argument to keep the 2nd option.

 Unfortunately, this has all come to a head at a time when the Nova
 team is heads down getting the Kilo release out the door. We simply
 don't have the time at the moment to properly consider these issues.
 So, I'd like to ask for us to put a pause on this current work until
 we have Kilo done. These issues are complicated and important, so I
 feel we shouldn't rush them at a time we are distracted.

Makes sense.  It seems clear this has now pushed back another release
(at least).

 Finally, I want to reinforce that the position we currently find
 ourselves in isn't because of a lack of effort. Oleg, Angus and Anita
 have all worked very hard on this problem during Kilo, and it is
 frustrating that we haven't managed to find a magic bullet to solve
 all of these problems. I want to personally thank each of them for
 their efforts this cycle on this relatively thankless task.

++

-- 
Russell Bryant


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-26 Thread Kyle Mestery
On Thu, Mar 26, 2015 at 7:58 PM, Russell Bryant rbry...@redhat.com wrote:

 On 03/26/2015 06:31 PM, Michael Still wrote:
  Hi,
 
  I thought it would be a good idea to send out a status update for the
  migration from nova-network to Neutron, as there hasn't been as much
  progress as we'd hoped for in Kilo. There are a few issues which have
  been slowing progress down.

 Thanks for writing up the status!

  First off, creating an all encompassing turn key upgrade is probably
  not possible. This was also never a goal of this effort -- to quote
  the spec for this work, Consequently, the process described here is
  not a “one size fits all” automated push-button tool but a series of
  steps that should be obvious to automate and customise to meet local
  needs [1]. The variety of deployment and configuration options
  available makes a turn key migration very hard to write, and possibly
  impossible to test. We therefore have opted for writing migration
  tools, which allow operators to plug components together in the way
  that makes sense for their deployment and then migrate using those.

 Yes, I'm quite convinced that it will end up being a fairly custom
 effort for virtually all deployments complex enough where just starting
 over or cold migration isn't an option.

  However, talking to operators at the Ops Summit, it has become clear
  that some operators simply aren't interested in moving to Neutron --
  largely because they either aren't interested in tenant networks, or
  have corporate network environments that make deploying Neutron very
  hard.

 I totally get point #1: nova-network has less features, but I don't
 need the rest, and nova-network is rock solid for me.

 I'm curious about the second point about Neutron being more difficult to
 deploy than nova-network.  That's interesting because it actually seems
 like Neutron is more flexible when it comes to integration with existing
 networks.  Do you know any more details?  If not, perhaps those with
 that concern could fill in with some detail here?

  So, even if we provide migration tools, it is still likely that
  we will end up with loyal nova-network users who aren't interested in
  moving. From the Nova perspective, the end goal of all of this effort
  is to delete the nova-network code, and if we can't do that because
  some people simply don't want to move, then what is gained by putting
  a lot of effort into migration tooling?

 To me it comes down to the reasons people don't want to move.  I'd like
 to dig into exactly why people don't want to use Neutron.  If there are
 legitimate reasons why nova-network will work better, then Neutron has
 not met parity and we're certainly not ready to deprecate nova-network.

 I still think getting down to a single networking project should be the
 end goal.  The confusion around networking choices has been detrimental
 to OpenStack.

  Therefore, there is some talk about spinning nova-network out into its
  own project where it could continue to live on and be better
  maintained than the current Nova team is able to do. However, this is
  a relatively new idea and we haven't had a chance to determine how
  feasible it is given where we are in the release cycle. I assume that
  if we did this, we would need to find a core team passionate about
  maintaining nova-network, and we would still need to provide some
  migration tooling for operators who are keen to move to Neutron.
  However, that migration tooling would be less critical than it is now.

 From a purely technical perspective, it seems like quite a bit of work.
  It reminds me of "we'll just split the scheduler out", and we see how
 long that's taking in practice.  I really think all of that effort is
 better spent just improving Neutron.

 From a community perspective, I'm not thrilled about long term
 fragmentation for such a fundamental piece of our stack.  So, I'd really
 like to dig into the current state of gaps between Neutron and
 nova-network.  If there were no real gaps, there would be no sensible
 argument to keep the 2nd option.

 I agree with Russell here. After talking to a few folks, my sense is there
is still a misunderstanding between people running nova-network and those
developing Neutron. I realize not everyone wants tenant networks, and I
think we can look at the use case for that and see how to map it to what
Neutron has, and fill in any missing gaps. There are some discussions
started already to see how we can fill those gaps.


  Unfortunately, this has all come to a head at a time when the Nova
  team is heads down getting the Kilo release out the door. We simply
  don't have the time at the moment to properly consider these issues.
  So, I'd like to ask for us to put a pause on this current work until
  we have Kilo done. These issues are complicated and important, so I
  feel we shouldn't rush them at a time we are distracted.

 Makes sense.  It seems clear this has now pushed back another release
 (at