Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-08 Thread Cathy Zhang
Hi Ihar,

Thank you for starting this thread. Here is what is on top of my mind for Mitaka:

I will be busy with work centered around the networking-sfc project.
Networking-sfc is a sub-project of Neutron (part of the Neutron Stadium)
that provides a service function chain API and related functionality.
It has a service chain plugin with an architecture similar to ML2's.
On the southbound side it can integrate with different SFC drivers,
such as an OVS SFC driver, an ONOS SFC driver, etc., and it allows different
data path encapsulation mechanisms. All the specifications have been reviewed
and merged into the networking-sfc repo.
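
As a rough illustration of how these resources compose (the resource and field
names below follow the merged specs as I understand them, so treat this as a
sketch rather than an API reference):

# Illustrative only: payloads for the four networking-sfc resources.
# The UUID placeholders would come from Neutron or previous API calls.

# A flow classifier selects which traffic enters the chain.
flow_classifier = {
    "name": "web-traffic",
    "protocol": "tcp",
    "destination_port_range_min": 80,
    "destination_port_range_max": 80,
    "source_ip_prefix": "10.0.0.0/24",
}

# A port pair is one service function instance (its ingress/egress Neutron ports).
port_pair = {
    "name": "firewall-vm-1",
    "ingress": "INGRESS_PORT_UUID",
    "egress": "EGRESS_PORT_UUID",
}

# A port pair group bundles equivalent instances for load distribution.
port_pair_group = {
    "name": "firewall-group",
    "port_pairs": ["PORT_PAIR_UUID"],
}

# The port chain ties classifiers to an ordered list of port pair groups.
port_chain = {
    "name": "fw-then-ids",
    "flow_classifiers": ["FLOW_CLASSIFIER_UUID"],
    "port_pair_groups": ["FIREWALL_GROUP_UUID", "IDS_GROUP_UUID"],
}

for resource in (flow_classifier, port_pair, port_pair_group, port_chain):
    print(resource)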
In the next cycle, I plan to:

1. Continue leading the project team to finish reviewing and updating the
code patches
2. Get the last piece of code (OVS agent and unit tests) tested and uploaded for review
3. Add API, functional, and fullstack tests
4. Have the code pass all gate testing to ensure no collateral damage
5. Get all the code fully reviewed and merged, ready for release in
Mitaka.
6. Give a deep dive presentation on the SFC project and its future roadmap,
and demo the functionality
   Following are some work items in the future roadmap:
   1) Integrate with containers to support a mixed chain of service functions
running on VMs or in containers
   2) Add a registration API to bring physical service function devices
into the chain and
  support a mix of service functions running on VMs, containers, or
physical devices
   3) Support OpenStack SFC integration with the ONOS controller
7. Contribute to the common classifier model (to be used in SFC, QoS, Tap as a
Service, FWaaS, security groups, etc.)
   and leverage the flow classifier design and code developed in
networking-sfc.
8. Contribute to port forwarding in Neutron, which could leverage the work done
in networking-sfc.
9. Help with Neutron code review, bug fixes, and testing as much as possible.

Here are some informational links on the networking-sfc project for your
reference:
http://docs.openstack.org/developer/networking-sfc/
https://github.com/openstack/networking-sfc
https://review.openstack.org/#/q/status:open+project:openstack/networking-sfc+branch:master+topic:networking-sfc,n,z
https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting

Thanks,
Cathy



-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com] 
Sent: Thursday, October 01, 2015 6:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

Hi all,

I talked recently with several contributors about what each of us plans for the 
next cycle, and found it’s quite useful to share thoughts with others, because 
you have immediate yay/nay feedback, and maybe find companions for next 
adventures, and what not. So I’ve decided to ask everyone what you see the team 
and you personally doing the next cycle, for fun or profit.

That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
no deadlines, just list random ideas you have in mind or in your todo lists, 
and we’ll all appreciate the huge pile of awesomeness no one will ever have 
time to implement even if scheduled for Xixao release.

To start the fun, I will share my silly ideas in the next email.

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-07 Thread Anna Kamyshnikova
I can't say that I have any great plans for this cycle, but I would like to
look into the L3 HA (L3 HA + DVR) feature, probably some bug fixes in this area,
and online data migration as a logical continuation of the online migration
support that was done in Liberty.

On Tue, Oct 6, 2015 at 8:34 PM, Ihar Hrachyshka  wrote:

> > On 06 Oct 2015, at 19:10, Thomas Goirand  wrote:
> >
> > On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
> >> Hi all,
> >>
> >> I talked recently with several contributors about what each of us plans
> for the next cycle, and found it’s quite useful to share thoughts with
> others, because you have immediate yay/nay feedback, and maybe find
> companions for next adventures, and what not. So I’ve decided to ask
> everyone what you see the team and you personally doing the next cycle, for
> fun or profit.
> >>
> >> That’s like a PTL nomination letter, but open to everyone! :) No
> commitments, no deadlines, just list random ideas you have in mind or in
> your todo lists, and we’ll all appreciate the huge pile of awesomeness no
> one will ever have time to implement even if scheduled for Xixao release.
> >>
> >> To start the fun, I will share my silly ideas in the next email.
> >>
> >> Ihar
> >
> > Could we have oslo-config-generator flat neutron.conf as a release goal
> > for Mitaka as well? The current configuration layout makes it difficult
> > for distributions to catch-up with working by default config.
>
> Good idea. I think we had some patches for that. I will try to keep it on
> my plate for M.
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-07 Thread Assaf Muller
On Wed, Oct 7, 2015 at 5:44 AM, Anna Kamyshnikova <
akamyshnik...@mirantis.com> wrote:

> I can' say that I have any great plans for this cycle, but I would like
> look into L3 HA (L3 HA + DVR) feature,
>

The agent side patch was merged yesterday, and the server side patch needs
reviews: https://review.openstack.org/#/c/143169/.
Your work in L3 HA land is greatly appreciated :)


> probably some bugfixes in this area and online data migration as logical
> continuation of online migration support that was done in Liberty.
>
> On Tue, Oct 6, 2015 at 8:34 PM, Ihar Hrachyshka 
> wrote:
>
>> > On 06 Oct 2015, at 19:10, Thomas Goirand  wrote:
>> >
>> > On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
>> >> Hi all,
>> >>
>> >> I talked recently with several contributors about what each of us
>> plans for the next cycle, and found it’s quite useful to share thoughts
>> with others, because you have immediate yay/nay feedback, and maybe find
>> companions for next adventures, and what not. So I’ve decided to ask
>> everyone what you see the team and you personally doing the next cycle, for
>> fun or profit.
>> >>
>> >> That’s like a PTL nomination letter, but open to everyone! :) No
>> commitments, no deadlines, just list random ideas you have in mind or in
>> your todo lists, and we’ll all appreciate the huge pile of awesomeness no
>> one will ever have time to implement even if scheduled for Xixao release.
>> >>
>> >> To start the fun, I will share my silly ideas in the next email.
>> >>
>> >> Ihar
>> >
>> > Could we have oslo-config-generator flat neutron.conf as a release goal
>> > for Mitaka as well? The current configuration layout makes it difficult
>> > for distributions to catch-up with working by default config.
>>
>> Good idea. I think we had some patches for that. I will try to keep it on
>> my plate for M.
>>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-07 Thread Edgar Magana
Hello All,

I am probably quite late answering this email, but I want to add our contributions.

We will be testing Neutron at scale, so we will be contributing to bugs opened 
based on the outcome of our tests. We will also be testing concurrency; we are 
leveraging Rally for this goal and also expect to find some bugs that we can close.

On the documentation side, I want to incorporate the sections that we are missing 
and make the guide a very good networking resource for users/operators.

We don’t have any feature requests right now; it is hard to catch up with the 
contributions so far, but maybe a bit of testing in the LBaaS area.

Looking forward to seeing you all in Tokyo,

Edgar





On 10/1/15, 6:45 AM, "Ihar Hrachyshka"  wrote:

>Hi all,
>
>I talked recently with several contributors about what each of us plans for 
>the next cycle, and found it’s quite useful to share thoughts with others, 
>because you have immediate yay/nay feedback, and maybe find companions for 
>next adventures, and what not. So I’ve decided to ask everyone what you see 
>the team and you personally doing the next cycle, for fun or profit.
>
>That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
>no deadlines, just list random ideas you have in mind or in your todo lists, 
>and we’ll all appreciate the huge pile of awesomeness no one will ever have 
>time to implement even if scheduled for Xixao release.
>
>To start the fun, I will share my silly ideas in the next email.
>
>Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-07 Thread Henry Gessau
Thanks Ihar. Here is what I plan to work on, or hope to help out with:

Continue working on alembic to support online migrations.

The alembic migrations seem to be a mysterious thing to many developers. I plan
to improve the devref documentation around this. The --autogenerate of
revisions, in particular, should work smoothly for developers and across the
various neutron stadium projects.

In Liberty each Neutron sub-project has a separate alembic environment and that
prevents sub-projects from having migration dependencies on other sub-projects.
So I will fix that.
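
For anyone to whom the migrations feel mysterious, an (auto)generated revision
boils down to a small Python module like the sketch below. The table and column
names here are invented for illustration; they are not from Neutron's actual
migration tree.

# Illustrative alembic revision module; table/column names are invented.
from alembic import op
import sqlalchemy as sa

# Revision identifiers, used by alembic (normally generated for you).
revision = 'abc123def456'
down_revision = 'fedcba654321'


def upgrade():
    # An expand-only change like adding a nullable column is friendly to
    # online migrations, since old code can keep writing rows without it.
    op.add_column('example_resources',
                  sa.Column('description', sa.String(255), nullable=True))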

Neutron's DB transaction issues at scale are a problem. I am not an expert in
this area, but would like to co-ordinate an effort with Mike Bayer and experts
from Neutron and other OpenStack projects to characterize the issues and plan
some solutions and/or best practices.

Neutron's IPv6 support is now quite extensive, but there are still some loose
ends like router HA support, and being able to use IPv6 for API/management. I
will help to co-ordinate the IPv6 updates in Mitaka. I will also sync up with
the state of IPv6 support in the *aaS and stadium repos and help out with IPv6
efforts there.

I think Salvatore's proposal [1] for API evolution is extremely important to
Neutron's future and I would like to see it happen this cycle. I will help out
where I can.

I want to help (mostly by reviewing and testing) the effort to make Neutron
support Python 3.

Always willing to help with, review, and improve Neutron's tests and testing
strategies.

[1] https://review.openstack.org/136760

-- 
Henry


On Thu, Oct 01, 2015, Ihar Hrachyshka  wrote:
> Hi all,
> 
> I talked recently with several contributors about what each of us plans for
> the next cycle, and found it’s quite useful to share thoughts with others,
> because you have immediate yay/nay feedback, and maybe find companions for
> next adventures, and what not. So I’ve decided to ask everyone what you see
> the team and you personally doing the next cycle, for fun or profit.
> 
> That’s like a PTL nomination letter, but open to everyone! :) No commitments,
> no deadlines, just list random ideas you have in mind or in your todo lists,
> and we’ll all appreciate the huge pile of awesomeness no one will ever have
> time to implement even if scheduled for Xixao release.
> 
> To start the fun, I will share my silly ideas in the next email.
> 
> Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-06 Thread Ihar Hrachyshka
> On 06 Oct 2015, at 19:10, Thomas Goirand  wrote:
> 
> On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
>> Hi all,
>> 
>> I talked recently with several contributors about what each of us plans for 
>> the next cycle, and found it’s quite useful to share thoughts with others, 
>> because you have immediate yay/nay feedback, and maybe find companions for 
>> next adventures, and what not. So I’ve decided to ask everyone what you see 
>> the team and you personally doing the next cycle, for fun or profit.
>> 
>> That’s like a PTL nomination letter, but open to everyone! :) No 
>> commitments, no deadlines, just list random ideas you have in mind or in 
>> your todo lists, and we’ll all appreciate the huge pile of awesomeness no 
>> one will ever have time to implement even if scheduled for Xixao release.
>> 
>> To start the fun, I will share my silly ideas in the next email.
>> 
>> Ihar
> 
> Could we have oslo-config-generator flat neutron.conf as a release goal
> for Mitaka as well? The current configuration layout makes it difficult
> for distributions to catch-up with working by default config.

Good idea. I think we had some patches for that. I will try to keep it on my 
plate for M.

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-06 Thread Thomas Goirand
On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
> Hi all,
> 
> I talked recently with several contributors about what each of us plans for 
> the next cycle, and found it’s quite useful to share thoughts with others, 
> because you have immediate yay/nay feedback, and maybe find companions for 
> next adventures, and what not. So I’ve decided to ask everyone what you see 
> the team and you personally doing the next cycle, for fun or profit.
> 
> That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
> no deadlines, just list random ideas you have in mind or in your todo lists, 
> and we’ll all appreciate the huge pile of awesomeness no one will ever have 
> time to implement even if scheduled for Xixao release.
> 
> To start the fun, I will share my silly ideas in the next email.
> 
> Ihar

Could we have a flat neutron.conf generated by oslo-config-generator as a
release goal for Mitaka as well? The current configuration layout makes it
difficult for distributions to catch up with a working-by-default config.
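
For context, oslo-config-generator assembles the flat sample file from option
lists exposed through 'oslo.config.opts' entry points. A minimal sketch of that
list_opts pattern, with a couple of example options rather than Neutron's real
opts module:

# Sketch of the list_opts hook that oslo-config-generator consumes via an
# 'oslo.config.opts' entry point; the options here are only examples.
from oslo_config import cfg

example_opts = [
    cfg.StrOpt('core_plugin',
               help='The core plugin Neutron will use.'),
    cfg.BoolOpt('allow_overlapping_ips', default=False,
                help='Allow overlapping IP support in Neutron.'),
]


def list_opts():
    # The generator calls this and renders a sample config from the result.
    return [('DEFAULT', example_opts)]


if __name__ == '__main__':
    for group, opts in list_opts():
        print(group, [opt.name for opt in opts])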

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-05 Thread Vikram Choudhary
On the top of my mind for Mitaka:


*Introducing common Classifier Model [1]:* Currently, Neutron services
(e.g. Service Function Chaining, QoS, Tap as a Service, FWaaS, Security
Groups, etc.) that require traffic classification each define their own
classifier model. This introduces redundancy. To address the redundancy
and ease code maintenance, we propose to have a common classifier
resource in Neutron which can in turn be leveraged by multiple services.


*Enhancing QoS functionality [2]:*
It is great to have the QoS framework merged into Neutron during the last
release cycle. Now it's time to consolidate and make the existing QoS
functionality richer by implementing a few of the simple and important ideas
listed below:
-> Support for the Linux bridge driver. *[2a]*
-> Adding VLAN priority tagging functionality. *[2b]*
-> Adding ECN functionality. *[2c]*


*Enhancing BGP framework [3]:*
Adding to Carl, I would also like to analyze:
-> How we can merge ongoing projects like BGPVPN *[3a]* and Edge VPN *[3b]*
with Neutron's BGP framework. Would it be reasonable to do that?
-> Route policing. *[3c]*
-> Advertising VPN routes. *[3d]*

*Stabilizing networking-onos [4]:*
networking-onos was introduced in the last cycle (Liberty). We are eyeing the
following items for this project:
-> Enhance current stability and reliability. *[4a]*
-> Introduce devstack support. *[4b]*
-> Develop a networking-sfc driver and provide an SFC PoC using networking-sfc
and the ONOS controller.


*Stabilizing networking-sfc [5]:*
We will work on the current patches *[5a]* and get the code in.

*References:*
[1].. https://bugs.launchpad.net/neutron/+bug/1476527

[2]..
https://github.com/openstack/neutron/blob/master/doc/source/devref/quality_of_service.rst
[2a]..
https://blueprints.launchpad.net/neutron/+spec/ml2-lb-ratelimit-support
[2b].. https://blueprints.launchpad.net/neutron/+spec/vlan-802.1p-qos
[2c]..
https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification

[3].. https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
[3a].. https://github.com/openstack/networking-bgpvpn
[3b].. https://github.com/stackforge/networking-edge-vpn
[3c]..
https://blueprints.launchpad.net/neutron/+spec/neutron-route-policy-support-for-dynamic-routing-protocol
[3d]..
https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol

[4].. https://github.com/openstack/networking-onos
[4a].. https://bugs.launchpad.net/networking-onos
[4b].. https://bugs.launchpad.net/networking-onos/+bug/1488907

[5].. https://github.com/openstack/networking-sfc
[5a]..
https://review.openstack.org/#/q/status:open+project:openstack/networking-sfc+branch:master+topic:networking-sfc,n,z

Thanks
Vikram

On Tue, Oct 6, 2015 at 6:08 AM, Carl Baldwin  wrote:

> (Cross-posting to the operators list for feedback)
>
> Thank you Ihar for starting this up.  In the absence of any kind of
> blog or other outlet of my own to disseminate this, let me share my
> plans here...
>
> Routed Networks:
>
> My plans for Mitaka (and beyond) are around routed networks.  During
> Liberty, I saw the request from operators in the form of an RFE [1]
> (one of the first in the new RFE process I think).  The request
> resonated with me because I had been thinking of doing this since
> almost the day I started working on HP's cloud.  We went on to define
> use cases around this to understand what operators are looking for
> [2].  It seemed to me like we were on the right track.
>
> I went to work and put up a spec which proposed using provider
> networks for each of the network segments.  It used one more instance
> of a Network for the routed network and a sort of provider router
> which connected it to the provider Networks for the segments.  This
> was a short cut and the community called me on it [3].  Despite some
> existing leakage of L3 in to the supposedly L2 only Network model, the
> community wants to keep Networks L2 only.  This makes a lot of sense
> and I think we'll end up with a better long-term solution even if we
> need to go through a little more pain to get there.  I've started a
> new proposal [4] which adds two new L3 constructs to the model.  They
> are RoutedNetworkGroup and SubnetGroup.
>
> A RoutedNetworkGroup groups Networks.  It represents a bunch of L2
> segments that are mutually reachable via L3.  In other words, there
> are a bunch of L2 networks, each with its own subnet(s).  All of the
> subnets of the other networks are reachable through the default routes
> of each of the networks.
>
> We will be able to create ports using a group as a hint instead of
> specifying a Network explicitly.  To accomplish this, the proposal
> adds a new mechanism to map hosts to Networks.  In our mid-cycle, we
> talked through a flow where Neutron could filter host candidates given
> a group as a hint based on IP availability on the underlying Networks.
> Port creation would be passed the same group hint and a specific host
> to 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-05 Thread Hirofumi Ichihara
Hi,

I have some plans in Mitaka cycle.

1. AZ support[1]
 - I proposed AZ support in Liberty, but the milestone is Mitaka now. The spec 
has been merged for Mitaka.
   I will keep proposing the patches on Gerrit.
2. LinuxBridge DVR
 - I'm trying to create a concrete implementation and have nearly achieved it. I 
will propose a BP or RFE bug ASAP.
3. Router status update[2]
 - I proposed it as a bug[3], but some folks disagreed with it. I will classify 
the requirements and propose the implementation.
4. Devstack
 - In the Liberty cycle, I aimed at removing the plugin restriction from devstack 
so that vendor plugin maintainers can easily keep maintaining their code in their 
own trees.
   I also tried to improve its quality for Neutron, and I have already finished 
some of that work.
   I will keep contributing to devstack for Neutron in the Mitaka cycle, including 
the discussion about the neutron repo vs. the devstack repo.
   If you have trouble with devstack related to Neutron (especially devstack 
plugins), I can help you.
5. More
 - I will keep contributing other things (especially logic for operators) to 
Neutron :)

[1]: https://blueprints.launchpad.net/neutron/+spec/add-availability-zone 

[2]: https://blueprints.launchpad.net/neutron/+spec/l3-router-status 

[3]: https://bugs.launchpad.net/neutron/+bug/1341290 


Thanks,
Hirofumi

> On 2015/10/01, at 23:02, Ihar Hrachyshka  wrote:
> 
>> On 01 Oct 2015, at 15:45, Ihar Hrachyshka  wrote:
>> 
>> Hi all,
>> 
>> I talked recently with several contributors about what each of us plans for 
>> the next cycle, and found it’s quite useful to share thoughts with others, 
>> because you have immediate yay/nay feedback, and maybe find companions for 
>> next adventures, and what not. So I’ve decided to ask everyone what you see 
>> the team and you personally doing the next cycle, for fun or profit.
>> 
>> That’s like a PTL nomination letter, but open to everyone! :) No 
>> commitments, no deadlines, just list random ideas you have in mind or in 
>> your todo lists, and we’ll all appreciate the huge pile of awesomeness no 
>> one will ever have time to implement even if scheduled for Xixao release.
>> 
>> To start the fun, I will share my silly ideas in the next email.
> 
> Here is my silly list of stuff to do.
> 
> - start adopting NeutronDbObject for core resources (ports, networks) [till 
> now, it’s used in QoS only];
> 
> - introduce a so called ‘core resource extender manager’ that would be able 
> to replace ml2 extension mechanism and become a plugin agnostic way of 
> extending core resources by additional plugins (think of port security or qos 
> available for ml2 only - that sucks!);
> 
> - more changes with less infra tinkering! neutron devs should not need to go 
> to infra projects so often to make an impact;
> -- make our little neat devstack plugin used for qos and sr-iov only a huge 
> pile of bash code that is currently stored in devstack and is proudly called 
> neutron-legacy now; and make the latter obsolete and eventually removed from 
> devstack;
> -- make tempest jobs use a gate hook as we already do for api jobs;
> 
> - qos:
> -- once we have gate hook triggered, finally introduce qos into tempest runs 
> to allow first qos scenarios merged;
> -- remove RPC upgrade tech debt that we left in L (that should open path for 
> new QoS rules that are currently blocked by it);
> -- look into races in rpc.callbacks notification pattern (Kevin mentioned he 
> had ideas in mind around that);
> 
> - oslo:
> -- kill the incubator: we have a single module consumed from there (cache); 
> Mitaka is the time for the witch to die in pain;
> -- adopt oslo.reports: that is something I failed to do in Liberty so that I 
> would have a great chance to do the same in Mitaka; basically, allow neutron 
> services to dump ‘useful info’ on SIGUSR2 sent; hopefully will make debugging 
> a bit easier;
> 
> - upgrades:
> -- we should return to partial job for neutron; it’s not ok our upgrade 
> strategy works by pure luck;
> -- overall, I feel that it’s needed to provide more details about how 
> upgrades are expected to work in OpenStack (the order of service upgrades; 
> constraints; managing RPC versions and deprecations; etc.) Probably devref 
> should be a good start. I talked to some nova folks involved in upgrades 
> there, and we may join the armies on that since general upgrade strategy 
> should be similar throughout the meta-project.
> 
> - stable:
> -- with a stadium of the size we have, it becomes a burden for 
> neutron-stable-maint to track backports for all projects; we should think of 
> opening doors for more per-sub-project stable cores for those subprojects 
> that seem sane in terms of development practices and stable awareness side; 
> that way we offload 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-05 Thread Carl Baldwin
(Cross-posting to the operators list for feedback)

Thank you Ihar for starting this up.  In the absence of any kind of
blog or other outlet of my own to disseminate this, let me share my
plans here...

Routed Networks:

My plans for Mitaka (and beyond) are around routed networks.  During
Liberty, I saw the request from operators in the form of an RFE [1]
(one of the first in the new RFE process I think).  The request
resonated with me because I had been thinking of doing this since
almost the day I started working on HP's cloud.  We went on to define
use cases around this to understand what operators are looking for
[2].  It seemed to me like we were on the right track.

I went to work and put up a spec which proposed using provider
networks for each of the network segments.  It used one more instance
of a Network for the routed network and a sort of provider router
which connected it to the provider Networks for the segments.  This
was a short cut and the community called me on it [3].  Despite some
existing leakage of L3 into the supposedly L2-only Network model, the
community wants to keep Networks L2 only.  This makes a lot of sense
and I think we'll end up with a better long-term solution even if we
need to go through a little more pain to get there.  I've started a
new proposal [4] which adds two new L3 constructs to the model.  They
are RoutedNetworkGroup and SubnetGroup.

A RoutedNetworkGroup groups Networks.  It represents a bunch of L2
segments that are mutually reachable via L3.  In other words, there
are a bunch of L2 networks, each with its own subnet(s).  All of the
subnets of the other networks are reachable through the default routes
of each of the networks.

We will be able to create ports using a group as a hint instead of
specifying a Network explicitly.  To accomplish this, the proposal
adds a new mechanism to map hosts to Networks.  In our mid-cycle, we
talked through a flow where Neutron could filter host candidates given
a group as a hint based on IP availability on the underlying Networks.
Port creation would be passed the same group hint and a specific host
to create a new port.

My current understanding is that this will meet operators' requirements
to allow users to choose where to boot a VM based on the L3 network so
that they don't have to be aware of the segments.  I'm looking to get
confirmation on this.  I've spoken to Kris Lindgren about this and it
seems promising.

A SubnetGroup groups Subnets.  I don't think the Subnet model in
Neutron does a good job of representing L3.  First, floating IPs are
connected to a Network instead of a Subnet, but floating IPs are really an
L3 thing!  Second, an L3 network often has more than just one CIDR.
With IPv4 specifically, it is the collection of CIDRs together that is
important.  The current model doesn't allow a group of subnets without
a Network.  The SubnetGroup object fills this gap.  It groups Subnets;
floating IPs will hang off of these instead of Networks; and they can
be owned by Networks or RoutedNetworkGroups.  One day, they will also
stand alone to represent a pure L3-only network (e.g. Calico).

This new model will allow Neutron routers to connect to an L3 network,
host floating IPs from the L3 network, and even allow routing directly
to tenant networks without NAT.
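
To make the grouping concrete, here is a purely illustrative sketch in plain
Python; it is not the data model from the proposal [4], just a picture of how
the pieces relate:

# Illustrative sketch only; the proposal [4] defines the real fields/semantics.

class SubnetGroup(object):
    """Groups Subnets; floating IPs would hang off this, not a Network."""
    def __init__(self, name, subnet_ids):
        self.name = name
        self.subnet_ids = list(subnet_ids)    # e.g. several IPv4 CIDRs


class RoutedNetworkGroup(object):
    """Groups L2 Networks that are mutually reachable via L3 routing."""
    def __init__(self, name, network_ids, subnet_group):
        self.name = name
        self.network_ids = list(network_ids)  # one entry per L2 segment
        self.subnet_group = subnet_group      # L3 addressing for the group


# A port request could carry the group as a hint instead of a Network; the
# scheduler would then pick a segment based on the host and IP availability.
fabric = RoutedNetworkGroup(
    name='rack-fabric',
    network_ids=['net-rack-1', 'net-rack-2'],
    subnet_group=SubnetGroup('rack-subnets', ['subnet-a', 'subnet-b']))
print(fabric.network_ids)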

BGP:

The BGP work is being led by Ryan Tidwell and Vikram Choudhary and is
discussed weekly in the L3 team meeting [5].  I plan to continue
giving my full support to this effort as it aligns very nicely with
the routed networks work.

Connecting Neutron routers to an L3 network needs one more thing to
work.  The L3 network needs to know the next hop address for any
routes behind them.  This includes floating ips and tenant networks.
This can be accomplished by injecting routes in to the routers, a
dynamic routing protocol, proxy arp, or whatever else.  The BGP work
being done aims to fill this gap.  In short, it turns Neutron into a
BGP speaker to the L3 network.

Floating IPs:

This is mostly future work.  In Neutron today, a floating IP is an IP
whose traffic goes to a Neutron router and is a 1-1 NAT to a private
IP on the tenant network.  I'd like to generalize this concept.  For
example, DNAT could be done per TCP/UDP port to different internal
ports/addresses.  Gal Sagie has posted a spec for something like this [6].

Who says floating IPs have to mean NAT?  I've seen how one operator
does floating IPs by instructing the VM to accept the floating IP
address as one of its own.  We could also do NAT at the port on the
compute host if the VM instance doesn't understand what to do with it.
The point is, why does the router have to be involved, especially if
we now have routed networks and BGP to work with?

We could also route more than one address to an instance or even whole
subnets.  Floating subnets, heh.  This might be useful for containers
running inside of instances and I actually talked to a few people at
LinuxCon who would like to see this happen.

There is 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-05 Thread Rossella Sblendido
Very nice thread Ihar!

Here are my plans:

1. Get the last patches of the blueprint restructure-l2-agent merged
and keep working on improving the agent. Some code refactoring is
definitely needed, and I'd like to add multiple workers.

2. Introduce oslo versioned objects (a minimal sketch of what such an
object looks like is included below, after this list)

3. Make it easier to get started in Neutron.
   - mentor people
   - write docs/blog posts
   - in general simplify the current code when possible

4. Get some performance data and store it for future reference

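
For item 2 above, a minimal sketch of what an oslo.versionedobjects object
looks like (illustrative fields only; this is not Neutron's actual
NeutronDbObject):

# Minimal oslo.versionedobjects sketch with made-up fields.
from oslo_versionedobjects import base as obj_base
from oslo_versionedobjects import fields as obj_fields


@obj_base.VersionedObjectRegistry.register
class ExamplePort(obj_base.VersionedObject):
    # VERSION is bumped whenever fields change; that is what makes rolling
    # upgrades of RPC payloads tractable.
    VERSION = '1.0'

    fields = {
        'id': obj_fields.UUIDField(),
        'name': obj_fields.StringField(nullable=True),
        'admin_state_up': obj_fields.BooleanField(default=True),
    }


port = ExamplePort(id='5d41402a-bc4b-4a76-b971-9d911017c592',
                   name='demo-port', admin_state_up=True)
# obj_to_primitive() yields a versioned, serializable form suitable for RPC.
print(port.obj_to_primitive())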

cheers,

Rossella

On Mon, 2015-10-05 at 18:29 +0900, Hirofumi Ichihara wrote:
> Hi,
> 
> 
> I have some plans in Mitaka cycle.
> 
> 
> 1. AZ support[1]
>  - I proposed AZ support in Liberty but the millstone is Mitaka now.
> The spec has been merged in Mitaka.
>I keep to propose the patches on Gerrit.
> 2. LinuxbrideDVR
>  - I'm trying to create concrete implementation and then I achieve it
> nearly. I will propose BP or RFE bug ASAP.
> 3. Router status update[2]
>  - I proposed it as bug[3] but some folks disagreed with this. I will
> classify the requirements and propose the implementation.
> 4. Devstack
>  - In Liberty cycle, I was aimed at removing plugin restriction from
> devstack so that vendor plugin maintainers easily keep to maintain the
> code in their tree.
>And also I tried to improve the quality for neutron. I’ve already
> finished some works.
>I will keep to contribute to devstack for neutron in Mitaka cycle
> including discussion about neutron repo vs devstack repo.
>If you have trouble with devstack related to neutron(especially
> devstack plugin), I can help you.
> 5. More
>  - I keep to contribute something else(especially logic for operators)
> for neutron :)
> 
> 
> [1]: https://blueprints.launchpad.net/neutron/+spec/add-availability-zone
> [2]: https://blueprints.launchpad.net/neutron/+spec/l3-router-status
> [3]: https://bugs.launchpad.net/neutron/+bug/1341290
> 
> 
> Thanks,
> Hirofumi
> 
> > On 2015/10/01, at 23:02, Ihar Hrachyshka 
> > wrote:
> > 
> > > On 01 Oct 2015, at 15:45, Ihar Hrachyshka 
> > > wrote:
> > > 
> > > Hi all,
> > > 
> > > I talked recently with several contributors about what each of us
> > > plans for the next cycle, and found it’s quite useful to share
> > > thoughts with others, because you have immediate yay/nay feedback,
> > > and maybe find companions for next adventures, and what not. So
> > > I’ve decided to ask everyone what you see the team and you
> > > personally doing the next cycle, for fun or profit.
> > > 
> > > That’s like a PTL nomination letter, but open to everyone! :) No
> > > commitments, no deadlines, just list random ideas you have in mind
> > > or in your todo lists, and we’ll all appreciate the huge pile of
> > > awesomeness no one will ever have time to implement even if
> > > scheduled for Xixao release.
> > > 
> > > To start the fun, I will share my silly ideas in the next email.
> > 
> > Here is my silly list of stuff to do.
> > 
> > - start adopting NeutronDbObject for core resources (ports,
> > networks) [till now, it’s used in QoS only];
> > 
> > - introduce a so called ‘core resource extender manager’ that would
> > be able to replace ml2 extension mechanism and become a plugin
> > agnostic way of extending core resources by additional plugins
> > (think of port security or qos available for ml2 only - that sucks!
> > );
> > 
> > - more changes with less infra tinkering! neutron devs should not
> > need to go to infra projects so often to make an impact;
> > -- make our little neat devstack plugin used for qos and sr-iov only
> > a huge pile of bash code that is currently stored in devstack and is
> > proudly called neutron-legacy now; and make the latter obsolete and
> > eventually removed from devstack;
> > -- make tempest jobs use a gate hook as we already do for api jobs;
> > 
> > - qos:
> > -- once we have gate hook triggered, finally introduce qos into
> > tempest runs to allow first qos scenarios merged;
> > -- remove RPC upgrade tech debt that we left in L (that should open
> > path for new QoS rules that are currently blocked by it);
> > -- look into races in rpc.callbacks notification pattern (Kevin
> > mentioned he had ideas in mind around that);
> > 
> > - oslo:
> > -- kill the incubator: we have a single module consumed from there
> > (cache); Mitaka is the time for the witch to die in pain;
> > -- adopt oslo.reports: that is something I failed to do in Liberty
> > so that I would have a great chance to do the same in Mitaka;
> > basically, allow neutron services to dump ‘useful info’ on SIGUSR2
> > sent; hopefully will make debugging a bit easier;
> > 
> > - upgrades:
> > -- we should return to partial job for neutron; it’s not ok our
> > upgrade strategy works by pure luck;
> > -- overall, I feel that it’s needed to provide more details about
> > how upgrades are expected to work in OpenStack (the order of service
> > upgrades; constraints; managing RPC versions and 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-03 Thread Doug Wiegley

> On Oct 1, 2015, at 8:59 AM, Ihar Hrachyshka  wrote:
> 
>> On 01 Oct 2015, at 17:42, Sean M. Collins  wrote:
>> 
>> On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
>>> On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:
>>> 
 On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> - more changes with less infra tinkering! neutron devs should not need
 to go to infra projects so often to make an impact;
> -- make our little neat DevStack plugin used for qos and sr-iov only a
 huge pile of bash code that is currently stored in DevStack and is proudly
 called neutron-legacy now; and make the latter obsolete and eventually
 removed from DevStack;
 
 We may need to discuss this. I am currently doing a refactor of the
 Neutron DevStack integration in
 
 https://review.openstack.org/168438
 
 If I understand your message correctly, I disagree that we should be
 moving all the DevStack support for Neutron out of DevStack and making
 it a plugin. All that does is move the mess from one corner of the room,
 to another corner.
 
 
>>> I would actually be in favor of cleaning up the mess AND moving it into
>>> neutron. If it's in Neutron, we control our own destiny with regards to
>>> landing patches which affect DevStack and ultimately our gate jobs. To me,
>>> that's a huge win-win. Thus, cleanup first, then move to Neutron.
>> 
>> Frankly we have a bad track record in DevStack, if we are to make an
>> argument about controlling our own destiny. Neutron-lib is in a sad
>> state of affairs because we haven't had the discipline to keep things
>> simple.
>> 
>> In fact, I think the whole genesis of the Neutron plugin for DevStack is
>> a great example of how controlling our own destiny has started to grow
>> the mess. Yes, we needed it to gate the QoS code. But now things are
>> starting to get added.
>> 
>> https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa261b8686072d9b448e8
>> 
>> The trend is now that people are going to throw things into the Neutron
>> DevStack plugin to get their doo-dad up and running, because making a
>> new repo is harder than creating a patch (which maybe shows are repo
>> creation process needs streamlining). I was originally for making
>> Neutron DevStack plugins that exist in their own repos, instead of
>> putting them in the Neutron tree. At least that makes things small,
>> manageable, and straight forward. Yes, it makes for more plugin lines in
>> your DevStack configuration, but at least you know what each one does,
>> instead of being an agglomeration.
>> 
> 
> Scattering devstack plugins in separate repos that are far from the code that 
> they actually try to manage seems to me like a huge waste of time and 
> resources. Once a component is out of the tree, I agree their devstack pieces 
> should go away too. But while we keep QoS or SR-IOV in the tree, I think it’s 
> the right place to have all stuff related in.

This conversation should include the devstack folks, because they do have some 
concerns about splitting up too much, and thus making it harder to unwedge the 
gate. I lean towards all neutron devstack in the neutron repo myself, but 
that’s not a decision to be made in a vacuum.

What I’d like to see, given everything I’ve heard from everywhere, and just IMO:

- *Minimal* neutron support in native devstack that basically provides the 
simplest nova-net/provider network/bare connectivity.
- All other neutron features that are in the neutron repo (tenant networking, 
qos, dvr), move into the neutron devstack plugin.
- Anything out of the neutron repo has its devstack support in its own repo 
(e.g. neutron-lbaas/devstack)

In other words, the devstack plugin lives in the repo that supports it.

Thanks,
doug


> 
>> If we are not careful, the Neutron DevStack plugin will grow into the big
>> mess that neutron-legacy is.
>> 
> 
> With your valuable reviewer comments, it has no way to come to such a pity 
> state. ;)
> 
>> Finally, Look at how many configuration knobs we have, and how there is
>> a tendency to introduce new ones, instead of using local.conf to inject
>> configuration into Neutron and the associated components. This ends up
>> making it very complicated for someone to actually run Neutron in their
>> DevStack, and I think a lot of people would give up and just run
>> Nova-Network, which I will note is *still the default*.
>> 
> 
> local.conf is fine but I believe we should still hide predefined sets of 
> configuration values that would define ‘roles’ like QoS or L3 or VPNaaS, 
> under ‘services’ (like q-qos or q-sriov).
> 
> I don’t believe the number of non-default knobs is the issue that bothers 
> people and make them use nova-network. The fact that default installation 
> does not set up networking properly is the issue though.
> 
>> We need to keep our ties strong with other 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-02 Thread Ben Pfaff
On Fri, Oct 02, 2015 at 08:19:47AM +0300, Gal Sagie wrote:
> *OVN*
> 
>1) OVN integration with Kuryr

Can you say anything more about what that entails?

Thanks,

Ben.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-02 Thread Moshe Levi


> -Original Message-
> From: Sean M. Collins [mailto:s...@coreitpro.com]
> Sent: Thursday, October 01, 2015 6:42 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] New cycle started. What are you up
> to, folks?
> 
> On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
> > On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins <s...@coreitpro.com>
> wrote:
> >
> > > On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > > > - more changes with less infra tinkering! neutron devs should not
> > > > need
> > > to go to infra projects so often to make an impact;
> > > > -- make our little neat DevStack plugin used for qos and sr-iov
> > > > only a
> > > huge pile of bash code that is currently stored in DevStack and is
> > > proudly called neutron-legacy now; and make the latter obsolete and
> > > eventually removed from DevStack;
> > >
> > > We may need to discuss this. I am currently doing a refactor of the
> > > Neutron DevStack integration in
> > >
> > > https://review.openstack.org/168438
> > >
> > > If I understand your message correctly, I disagree that we should be
> > > moving all the DevStack support for Neutron out of DevStack and
> > > making it a plugin. All that does is move the mess from one corner
> > > of the room, to another corner.
> > >
> > >
> > I would actually be in favor of cleaning up the mess AND moving it
> > into neutron. If it's in Neutron, we control our own destiny with
> > regards to landing patches which affect DevStack and ultimately our
> > gate jobs. To me, that's a huge win-win. Thus, cleanup first, then move to
> Neutron.
> 
> Frankly we have a bad track record in DevStack, if we are to make an
> argument about controlling our own destiny. Neutron-lib is in a sad state of
> affairs because we haven't had the discipline to keep things simple.
> 
> In fact, I think the whole genesis of the Neutron plugin for DevStack is a 
> great
> example of how controlling our own destiny has started to grow the mess.
> Yes, we needed it to gate the QoS code. But now things are starting to get
> added.
> 
> https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa26
> 1b8686072d9b448e8
> 
I think the decision should be based on where the core code is located. 
So if SR-IOV, OVS, Linux Bridge, and QoS are still in Neutron core, the neutron 
devstack plugin 
should know how to install them. If we decide to move them to different 
repos, then
their devstack parts should be moved as well.


> The trend is now that people are going to throw things into the Neutron
> DevStack plugin to get their doo-dad up and running, because making a new
> repo is harder than creating a patch (which maybe shows are repo creation
> process needs streamlining). I was originally for making Neutron DevStack
> plugins that exist in their own repos, instead of putting them in the Neutron
> tree. At least that makes things small, manageable, and straight forward. Yes,
> it makes for more plugin lines in your DevStack configuration, but at least 
> you
> know what each one does, instead of being an agglomeration.
> 
> If we are not careful, the Neutron DevStack plugin will grow into the big mess
> that neutron-legacy is.
> 
> Finally, Look at how many configuration knobs we have, and how there is a
> tendency to introduce new ones, instead of using local.conf to inject
> configuration into Neutron and the associated components. This ends up
> making it very complicated for someone to actually run Neutron in their
> DevStack, and I think a lot of people would give up and just run Nova-
> Network, which I will note is *still the default*.
> 
> We need to keep our ties strong with other projects, and improve them in
> some cases. I think culturally, if we start trying to move things into our 
> corner
> of the sandbox because working with other groups is hard, we send bad
> signals to others. This will eventually come back to bite us.
> 
> /rant
> 
> --
> Sean M. Collins
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-02 Thread Neil Jerram
On 01/10/15 14:47, Ihar Hrachyshka wrote:
> Hi all,
>
> I talked recently with several contributors about what each of us plans for 
> the next cycle, and found it’s quite useful to share thoughts with others, 
> because you have immediate yay/nay feedback, and maybe find companions for 
> next adventures, and what not. So I’ve decided to ask everyone what you see 
> the team and you personally doing the next cycle, for fun or profit.
>
> That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
> no deadlines, just list random ideas you have in mind or in your todo lists, 
> and we’ll all appreciate the huge pile of awesomeness no one will ever have 
> time to implement even if scheduled for Xixao release.

:-)

My plans are centred around the model for routed networking, and Calico
as a particular implementation of that.  But I'd also like to continue
understanding Neutron more broadly and deeply, so that I can be more
generally helpful.

_Routed networking and Calico_

- Help Carl to define API and data model enhancements that describe
routed networking semantics.

- Help with the core Neutron implementation of that.

- Update networking-calico accordingly.

- Enhance pluggable IPAM so that IP allocation for an instance can
depend on where that instance's host is.  (This is relevant when the
Neutron network maps onto some relatively complex physical network, and
for route aggregation on fabric routers.)
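
As a purely illustrative sketch of that idea (this is not the actual pluggable
IPAM driver interface, and the host-to-prefix mapping is invented):

# Toy host-aware allocation: pick an address from the prefix that is routed
# to the segment the instance's host lives on.
import ipaddress

# Hypothetical mapping from compute host to the prefix for its rack/segment.
HOST_TO_PREFIX = {
    'compute-rack1-01': ipaddress.ip_network('10.1.0.0/24'),
    'compute-rack2-01': ipaddress.ip_network('10.2.0.0/24'),
}


def allocate_for_host(host, allocated):
    """Return the first free address in the prefix for the host's segment."""
    prefix = HOST_TO_PREFIX[host]
    for addr in prefix.hosts():
        if str(addr) not in allocated:
            return str(addr)
    raise RuntimeError('No addresses left for %s' % host)


print(allocate_for_host('compute-rack1-01', allocated={'10.1.0.1'}))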

_Wider Neutron_

- Better understand L3 initiatives like BGP dynamic routing.

- Better understand Neutron scaling.

- Contribute more across the board.


Hope that's useful!

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-02 Thread Miguel Angel Ajo



Moshe Levi wrote:

-Original Message-
From: Sean M. Collins [mailto:s...@coreitpro.com]
Sent: Thursday, October 01, 2015 6:42 PM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] New cycle started. What are you up
to, folks?

On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:

On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins<s...@coreitpro.com>

wrote:

On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:

- more changes with less infra tinkering! neutron devs should not
need

to go to infra projects so often to make an impact;

-- make our little neat DevStack plugin used for qos and sr-iov
only a

huge pile of bash code that is currently stored in DevStack and is
proudly called neutron-legacy now; and make the latter obsolete and
eventually removed from DevStack;

We may need to discuss this. I am currently doing a refactor of the
Neutron DevStack integration in

https://review.openstack.org/168438

If I understand your message correctly, I disagree that we should be
moving all the DevStack support for Neutron out of DevStack and
making it a plugin. All that does is move the mess from one corner
of the room, to another corner.



I would actually be in favor of cleaning up the mess AND moving it
into neutron. If it's in Neutron, we control our own destiny with
regards to landing patches which affect DevStack and ultimately our
gate jobs. To me, that's a huge win-win. Thus, cleanup first, then move to

Neutron.

Frankly we have a bad track record in DevStack, if we are to make an
argument about controlling our own destiny. Neutron-lib is in a sad state of
affairs because we haven't had the discipline to keep things simple.

In fact, I think the whole genesis of the Neutron plugin for DevStack is a great
example of how controlling our own destiny has started to grow the mess.
Yes, we needed it to gate the QoS code. But now things are starting to get
added.

https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa26
1b8686072d9b448e8


I think the decision  should be based on where is the core code located.
So if SR-IOV, OVS ,Linux Bridge, Qos are still in the neutron core the neutron 
devstack plugin
should know how to install them. If we will decide to move them to different 
repos the
their devstack part should be moved as well.

That is correct. Eventually, we should either:

1) Move all neutron devstack code into a plugin, or
2) Move the QoS bits back into devstack.

The decision to make QoS part of the core was because we're extending 
core resources, and our final aim
is to make it available to all plugins (this is where the core 
resource extension manager that Ihar
pointed out comes into place).



The trend is now that people are going to throw things into the Neutron
DevStack plugin to get their doo-dad up and running, because making a new
repo is harder than creating a patch (which maybe shows are repo creation
process needs streamlining). I was originally for making Neutron DevStack
plugins that exist in their own repos,
Sincerely, Sean, IMHO it doesn't make any sense to create a repository 
uniquely for a devstack plugin
to enable a feature that is in the main repository. That's also broken.

Would you ask for a separate devstack plugin for L3 too?


instead of putting them in the Neutron
tree. At least that makes things small, manageable, and straight forward. Yes,
it makes for more plugin lines in your DevStack configuration, but at least you
know what each one does, instead of being an agglomeration.

If we are not careful, the Neutron DevStack plugin will grow into the big mess
that neutron-legacy is.


It's a good opportunity to refactor as we move, if "1" is a good 
strategy; otherwise, if you think
neutron-legacy is a mess, let's work on cleaning it up while doing "2".



Finally, Look at how many configuration knobs we have, and how there is a
tendency to introduce new ones, instead of using local.conf to inject
configuration into Neutron and the associated components. This ends up
making it very complicated for someone to actually run Neutron in their
DevStack, and I think a lot of people would give up and just run Nova-
Network, which I will note is *still the default*.


Hmm, I'm not sure I follow: if people need to tweak localrc in 
extremis, that's going
to be even more painful from the user/developer perspective.



We need to keep our ties strong with other projects, and improve them in
some cases. I think culturally, if we start trying to move things into our 
corner
of the sandbox because working with other groups is hard, we send bad
signals to others. This will eventually come back to bite us.

/rant

--
Sean M. Collins



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openst

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-02 Thread Ben Pfaff
On Fri, Oct 02, 2015 at 10:07:32PM +0300, Gal Sagie wrote:
> If you are not familiar with Kuryr you can read my blog post about it here
> [1].
> Basically we already have a working demo integrating with Neutron and we
> are going to show
> it in OpenStack Tokyo (probably in the keynotes and in a specific Kuryr
> session).
> 
> For the OVN part, there are some areas that i want to work on:

Since it sounds like you're targeting a lot for Tokyo later this month,
I guess I can learn more then.  Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-02 Thread Gal Sagie
Hi Ben,

If you are not familiar with Kuryr you can read my blog post about it here
[1].
Basically, we already have a working demo integrating with Neutron, and we
are going to show
it at OpenStack Tokyo (probably in the keynotes and in a specific Kuryr
session).

For the OVN part, there are some areas that i want to work on:

1) Expose the ability in OVN to configure nested containers inside a VM
(through a formal Neutron API)
(which is also going to be used for the Magnum-Kuryr integration, as
this deployment is the main
 use case for Magnum)
2) Provide a generic VIF binding layer for OVS (which can be used by OVN as
well) - this is already work in progress and will
be ready by the summit
3) Provide a containerised OVN plugin image (compatible with OpenStack Kolla
and Ansible) for deployment
4) Add a VLAN allocation mechanism (for the nested containers)
5) As OVN matures, expose some of its features/applications; for example, a
distributed load balancer
can be used to replace the default LB implementation for Kubernetes services
(leveraging the Neutron LBaaS API)
(and other Neutron features as they are implemented, for example
Neutron port DNS resolution by name,
 tags on resources for pre-allocation of networks/ports for containers,
and so on).

If there are any people from the OVN team who want to join this effort,
you are always welcome.
I think this joint effort in the community is much better than every
project re-implementing this or
implementing specific Docker/Kubernetes integrations on their own.

[1]
http://galsagie.github.io/sdn/openstack/docker/kuryr/neutron/2015/08/24/kuryr-part1/

On Fri, Oct 2, 2015 at 3:50 PM, Ben Pfaff  wrote:

> On Fri, Oct 02, 2015 at 08:19:47AM +0300, Gal Sagie wrote:
> > *OVN*
> >
> >1) OVN integration with Kuryr
>
> Can you say anything more about what that entails?
>
> Thanks,
>
> Ben.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Sean M. Collins
On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> - more changes with less infra tinkering! neutron devs should not need to go 
> to infra projects so often to make an impact;
> -- make our little neat devstack plugin used for qos and sr-iov only a huge 
> pile of bash code that is currently stored in devstack and is proudly called 
> neutron-legacy now; and make the latter obsolete and eventually removed from 
> devstack;

We may need to discuss this. I am currently doing a refactor of the
Neutron DevStack integration in 

https://review.openstack.org/168438

If I understand your message correctly, I disagree that we should be
moving all the DevStack support for Neutron out of DevStack and making
it a plugin. All that does is move the mess from one corner of the room,
to another corner.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Armando M.
On 1 October 2015 at 06:45, Ihar Hrachyshka  wrote:

> Hi all,
>
> I talked recently with several contributors about what each of us plans
> for the next cycle, and found it’s quite useful to share thoughts with
> others, because you have immediate yay/nay feedback, and maybe find
> companions for next adventures, and what not. So I’ve decided to ask
> everyone what you see the team and you personally doing the next cycle, for
> fun or profit.
>
> That’s like a PTL nomination letter, but open to everyone! :) No
> commitments, no deadlines, just list random ideas you have in mind or in
> your todo lists, and we’ll all appreciate the huge pile of awesomeness no
> one will ever have time to implement even if scheduled for Xixao release.
>

You mean Xixao, once we have already rotated the alphabet? :)


>
> To start the fun, I will share my silly ideas in the next email.
>

Thanks for starting this thread. I think having people share ideas can be
useful to help us have a sense of what they are going to work on during the
next release. Obviously some of these ideas will eventually feed into
neutron-specs.

Kyle also tried to capture people's workload in [1] at one point. Perhaps
we can revisit that idea and develop it further based on your input here.

Definitely food for thought.

[1]
https://github.com/openstack/neutron-specs/blob/master/priorities/kilo-priorities.rst


>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ihar Hrachyshka
> On 01 Oct 2015, at 15:45, Ihar Hrachyshka  wrote:
> 
> Hi all,
> 
> I talked recently with several contributors about what each of us plans for 
> the next cycle, and found it’s quite useful to share thoughts with others, 
> because you have immediate yay/nay feedback, and maybe find companions for 
> next adventures, and what not. So I’ve decided to ask everyone what you see 
> the team and you personally doing the next cycle, for fun or profit.
> 
> That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
> no deadlines, just list random ideas you have in mind or in your todo lists, 
> and we’ll all appreciate the huge pile of awesomeness no one will ever have 
> time to implement even if scheduled for Xixao release.
> 
> To start the fun, I will share my silly ideas in the next email.

Here is my silly list of stuff to do.

- start adopting NeutronDbObject for core resources (ports, networks) [till 
now, it’s used in QoS only];
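
To make this concrete, the pattern would look roughly like the sketch below
(modelled on the existing QoS objects; the field list and the exact
base-class contract here are illustrative, not a committed design):

# Rough, illustrative sketch only: a versioned Port object following the
# oslo.versionedobjects pattern already used by the QoS objects. The field
# names and the db_model hook are assumptions, not an agreed design.
from oslo_versionedobjects import base as obj_base
from oslo_versionedobjects import fields as obj_fields

from neutron.db import models_v2          # existing SQLAlchemy models
from neutron.objects import base          # where NeutronDbObject lives today


@obj_base.VersionedObjectRegistry.register
class Port(base.NeutronDbObject):
    # VERSION gets bumped whenever fields change, so agents and servers
    # running different releases can negotiate a common object version.
    VERSION = '1.0'

    db_model = models_v2.Port

    fields = {
        'id': obj_fields.UUIDField(),
        'network_id': obj_fields.UUIDField(),
        'name': obj_fields.StringField(nullable=True),
        'admin_state_up': obj_fields.BooleanField(),
    }

Lookups and writes would then go through the object API instead of raw
SQLAlchemy queries scattered around the plugin code.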

- introduce a so-called ‘core resource extender manager’ that would be able to 
replace the ml2 extension mechanism and become a plugin-agnostic way for 
additional plugins to extend core resources (think of port security or qos 
being available for ml2 only - that sucks!);
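
Nothing is designed here yet; purely to illustrate the idea, a plugin-agnostic
hook could look something like the sketch below (every name in it is invented
for illustration):

# Hypothetical shape of a 'core resource extender' hook that any service
# plugin could register, instead of relying on ml2-only extension drivers.
# All names are illustrative, not an existing or agreed API.
import abc


class CoreResourceExtender(abc.ABC):

    @abc.abstractmethod
    def extend_resource(self, resource_type, resource_dict, db_object):
        """Add this extension's fields to a port/network API response."""

    @abc.abstractmethod
    def process_fields(self, context, resource_type, requested_fields):
        """Validate and persist extension-owned fields on create/update."""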

- more changes with less infra tinkering! neutron devs should not need to go to 
infra projects so often to make an impact;
-- grow our little neat devstack plugin (currently used for qos and sr-iov 
only) to take over the huge pile of bash code that is currently stored in 
devstack and is proudly called neutron-legacy now; and make the latter 
obsolete and eventually removed from devstack;
-- make tempest jobs use a gate hook as we already do for api jobs;

- qos:
-- once we have the gate hook triggered, finally introduce qos into tempest 
runs to allow the first qos scenarios to merge;
-- remove the RPC upgrade tech debt that we left in L (that should open the 
path for new QoS rules that are currently blocked by it);
-- look into races in the rpc.callbacks notification pattern (Kevin mentioned 
he had ideas in mind around that);

- oslo:
-- kill the incubator: we have a single module consumed from there (cache); 
Mitaka is the time for the witch to die in pain;
-- adopt oslo.reports: that is something I failed to do in Liberty, so that I 
have a great chance to do the same in Mitaka; basically, allow neutron 
services to dump ‘useful info’ on SIGUSR2; hopefully this will make debugging 
a bit easier;
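
For those who have not looked at oslo.reports yet, the wiring is roughly the
snippet below (modelled on how nova hooks it up; the neutron-side import paths
and the version module interface are my assumption, not a ready patch):

# Sketch of wiring oslo.reports into a neutron service entry point. Once
# set up, sending the report signal to the process dumps threads, config
# and package versions for debugging. The paths mirror nova and are
# assumed to translate to neutron as-is.
from oslo_reports import guru_meditation_report as gmr

from neutron import version


def main():
    # Register the report signal handler before the service starts serving.
    gmr.TextGuruMeditation.setup_autorun(version)

    # ... the existing neutron-server / agent startup continues here ...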

- upgrades:
-- we should return to a partial job for neutron; it’s not ok that our upgrade 
strategy works by pure luck;
-- overall, I feel we need to provide more detail about how upgrades are 
expected to work in OpenStack (the order of service upgrades; constraints; 
managing RPC versions and deprecations; etc.). A devref is probably a good 
start. I talked to some nova folks involved in upgrades there, and we may join 
forces on that, since the general upgrade strategy should be similar 
throughout the meta-project.

- stable:
-- with a stadium of the size we have, it becomes a burden for 
neutron-stable-maint to track backports for all projects; we should think 
about opening the door to more per-subproject stable cores for those 
subprojects that look sane in terms of development practices and stable 
awareness; that way we free up the neutron-stable-maint folks for stuff with 
greater impact (aka stuff they actually know).

And what are you folks thinking of?

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Kyle Mestery
On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:

> On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > - more changes with less infra tinkering! neutron devs should not need
> to go to infra projects so often to make an impact;
> > -- make our little neat devstack plugin used for qos and sr-iov only a
> huge pile of bash code that is currently stored in devstack and is proudly
> called neutron-legacy now; and make the latter obsolete and eventually
> removed from devstack;
>
> We may need to discuss this. I am currently doing a refactor of the
> Neutron DevStack integration in
>
> https://review.openstack.org/168438
>
> If I understand your message correctly, I disagree that we should be
> moving all the DevStack support for Neutron out of DevStack and making
> it a plugin. All that does is move the mess from one corner of the room,
> to another corner.
>
I would actually be in favor of cleaning up the mess AND moving it into
neutron. If it's in Neutron, we control our own destiny with regards to
landing patches which affect devstack and ultimately our gate jobs. To me,
that's a huge win-win. Thus, cleanup first, then move to Neutron.


> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Sean M. Collins
On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
> On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:
> 
> > On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > > - more changes with less infra tinkering! neutron devs should not need
> > to go to infra projects so often to make an impact;
> > > -- make our little neat DevStack plugin used for qos and sr-iov only a
> > huge pile of bash code that is currently stored in DevStack and is proudly
> > called neutron-legacy now; and make the latter obsolete and eventually
> > removed from DevStack;
> >
> > We may need to discuss this. I am currently doing a refactor of the
> > Neutron DevStack integration in
> >
> > https://review.openstack.org/168438
> >
> > If I understand your message correctly, I disagree that we should be
> > moving all the DevStack support for Neutron out of DevStack and making
> > it a plugin. All that does is move the mess from one corner of the room,
> > to another corner.
> >
> >
> I would actually be in favor of cleaning up the mess AND moving it into
> neutron. If it's in Neutron, we control our own destiny with regards to
> landing patches which affect DevStack and ultimately our gate jobs. To me,
> that's a huge win-win. Thus, cleanup first, then move to Neutron.

Frankly we have a bad track record in DevStack, if we are to make an
argument about controlling our own destiny. Neutron-lib is in a sad
state of affairs because we haven't had the discipline to keep things
simple.

In fact, I think the whole genesis of the Neutron plugin for DevStack is
a great example of how controlling our own destiny has started to grow
the mess. Yes, we needed it to gate the QoS code. But now things are
starting to get added.

https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa261b8686072d9b448e8

The trend is now that people are going to throw things into the Neutron
DevStack plugin to get their doo-dad up and running, because making a
new repo is harder than creating a patch (which maybe shows our repo
creation process needs streamlining). I was originally for making
Neutron DevStack plugins that exist in their own repos, instead of
putting them in the Neutron tree. At least that makes things small,
manageable, and straightforward. Yes, it makes for more plugin lines in
your DevStack configuration, but at least you know what each one does,
instead of ending up with one big agglomeration.

If we are not careful, the Neutron DevStack plugin will grow into the big
mess that neutron-legacy is.

Finally, look at how many configuration knobs we have, and how there is
a tendency to introduce new ones, instead of using local.conf to inject
configuration into Neutron and the associated components. This ends up
making it very complicated for someone to actually run Neutron in their
DevStack, and I think a lot of people would give up and just run
Nova-Network, which I will note is *still the default*.

We need to keep our ties strong with other projects, and improve them in
some cases. I think culturally, if we start trying to move things into
our corner of the sandbox because working with other groups is hard, we
send bad signals to others. This will eventually come back to bite us.

/rant

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ihar Hrachyshka
> On 01 Oct 2015, at 17:05, Kyle Mestery  wrote:
> 
> On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:
> On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > - more changes with less infra tinkering! neutron devs should not need to 
> > go to infra projects so often to make an impact;
> > -- make our little neat devstack plugin used for qos and sr-iov only a huge 
> > pile of bash code that is currently stored in devstack and is proudly 
> > called neutron-legacy now; and make the latter obsolete and eventually 
> > removed from devstack;
> 
> We may need to discuss this. I am currently doing a refactor of the
> Neutron DevStack integration in
> 
> https://review.openstack.org/168438
> 
> If I understand your message correctly, I disagree that we should be
> moving all the DevStack support for Neutron out of DevStack and making
> it a plugin. All that does is move the mess from one corner of the room,
> to another corner.
> 
> I would actually be in favor of cleaning up the mess AND moving it into 
> neutron. If it's in Neutron, we control our own destiny with regards to 
> landing patches which affect devstack and ultimately our gate jobs. To me, 
> that's a huge win-win. Thus, cleanup first, then move to Neutron.
> 

The idea is to make it *both* clean and under neutron team control. The latter 
has huge benefits for the team, allowing a quick turnaround on patches. We 
landed qos and sr-iov support in no time, and I believe the same should apply 
to all the goods we want to ship. Ideally, devstack would only contain a 
single neutron-related line that enables our plugin.

No more dependency on the devstack core team, no more devstack folks burdened 
by random networking stuff they don’t really care that much about. Ain’t it a 
win-win?

I agree we should not step on each other’s feet, and I am ok with helping with 
the cleanup as Sean sees it (note: tell me more, though, about what is 
envisioned for the cleanup), or with stepping out of it while Sean takes care 
of his stuff.

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Armando M.
On 1 October 2015 at 08:42, Sean M. Collins  wrote:

> On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
> > On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins 
> wrote:
> >
> > > On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > > > - more changes with less infra tinkering! neutron devs should not
> need
> > > to go to infra projects so often to make an impact;
> > > > -- make our little neat DevStack plugin used for qos and sr-iov only
> a
> > > huge pile of bash code that is currently stored in DevStack and is
> proudly
> > > called neutron-legacy now; and make the latter obsolete and eventually
> > > removed from DevStack;
> > >
> > > We may need to discuss this. I am currently doing a refactor of the
> > > Neutron DevStack integration in
> > >
> > > https://review.openstack.org/168438
> > >
> > > If I understand your message correctly, I disagree that we should be
> > > moving all the DevStack support for Neutron out of DevStack and making
> > > it a plugin. All that does is move the mess from one corner of the
> room,
> > > to another corner.
> > >
> > >
> > I would actually be in favor of cleaning up the mess AND moving it into
> > neutron. If it's in Neutron, we control our own destiny with regards to
> > landing patches which affect DevStack and ultimately our gate jobs. To
> me,
> > that's a huge win-win. Thus, cleanup first, then move to Neutron.
>
> Frankly we have a bad track record in DevStack, if we are to make an
> argument about controlling our own destiny. Neutron-lib is in a sad
> state of affairs because we haven't had the discipline to keep things
> simple.
>

IMO we can't make these statements; otherwise, what's the point in looking
forward if all we do is base our actions on some _indelible_ past?

As for the rest, I am gonna let this thread sink in a bit before I come up
with the more elaborate answer that this thread deserves.


>
> In fact, I think the whole genesis of the Neutron plugin for DevStack is
> a great example of how controlling our own destiny has started to grow
> the mess. Yes, we needed it to gate the QoS code. But now things are
> starting to get added.
>
>
> https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa261b8686072d9b448e8
>
> The trend is now that people are going to throw things into the Neutron
> DevStack plugin to get their doo-dad up and running, because making a
> new repo is harder than creating a patch (which maybe shows are repo
> creation process needs streamlining). I was originally for making
> Neutron DevStack plugins that exist in their own repos, instead of
> putting them in the Neutron tree. At least that makes things small,
> manageable, and straight forward. Yes, it makes for more plugin lines in
> your DevStack configuration, but at least you know what each one does,
> instead of being an agglomeration.
>
> If we are not careful, the Neutron DevStack plugin will grow into the big
> mess that neutron-legacy is.
>
> Finally, Look at how many configuration knobs we have, and how there is
> a tendency to introduce new ones, instead of using local.conf to inject
> configuration into Neutron and the associated components. This ends up
> making it very complicated for someone to actually run Neutron in their
> DevStack, and I think a lot of people would give up and just run
> Nova-Network, which I will note is *still the default*.
>
> We need to keep our ties strong with other projects, and improve them in
> some cases. I think culturally, if we start trying to move things into
> our corner of the sandbox because working with other groups is hard, we
> send bad signals to others. This will eventually come back to bite us.
>
> /rant
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ihar Hrachyshka
> On 01 Oct 2015, at 17:42, Sean M. Collins  wrote:
> 
> On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
>> On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:
>> 
>>> On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
 - more changes with less infra tinkering! neutron devs should not need
>>> to go to infra projects so often to make an impact;
 -- make our little neat DevStack plugin used for qos and sr-iov only a
>>> huge pile of bash code that is currently stored in DevStack and is proudly
>>> called neutron-legacy now; and make the latter obsolete and eventually
>>> removed from DevStack;
>>> 
>>> We may need to discuss this. I am currently doing a refactor of the
>>> Neutron DevStack integration in
>>> 
>>> https://review.openstack.org/168438
>>> 
>>> If I understand your message correctly, I disagree that we should be
>>> moving all the DevStack support for Neutron out of DevStack and making
>>> it a plugin. All that does is move the mess from one corner of the room,
>>> to another corner.
>>> 
>>> 
>> I would actually be in favor of cleaning up the mess AND moving it into
>> neutron. If it's in Neutron, we control our own destiny with regards to
>> landing patches which affect DevStack and ultimately our gate jobs. To me,
>> that's a huge win-win. Thus, cleanup first, then move to Neutron.
> 
> Frankly we have a bad track record in DevStack, if we are to make an
> argument about controlling our own destiny. Neutron-lib is in a sad
> state of affairs because we haven't had the discipline to keep things
> simple.
> 
> In fact, I think the whole genesis of the Neutron plugin for DevStack is
> a great example of how controlling our own destiny has started to grow
> the mess. Yes, we needed it to gate the QoS code. But now things are
> starting to get added.
> 
> https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa261b8686072d9b448e8
> 
> The trend is now that people are going to throw things into the Neutron
> DevStack plugin to get their doo-dad up and running, because making a
> new repo is harder than creating a patch (which maybe shows are repo
> creation process needs streamlining). I was originally for making
> Neutron DevStack plugins that exist in their own repos, instead of
> putting them in the Neutron tree. At least that makes things small,
> manageable, and straight forward. Yes, it makes for more plugin lines in
> your DevStack configuration, but at least you know what each one does,
> instead of being an agglomeration.
> 

Scattering devstack plugins across separate repos, far from the code they 
actually try to manage, seems to me like a huge waste of time and resources. 
Once a component is out of the tree, I agree its devstack pieces should go 
away too. But while we keep QoS or SR-IOV in the tree, I think it’s the right 
place to have all the related stuff in.

> If we are not careful, the Neutron DevStack plugin will grow into the big
> mess that neutron-legacy is.
> 

With your valuable reviewer comments, there is no way it can come to such a 
pitiful state. ;)

> Finally, Look at how many configuration knobs we have, and how there is
> a tendency to introduce new ones, instead of using local.conf to inject
> configuration into Neutron and the associated components. This ends up
> making it very complicated for someone to actually run Neutron in their
> DevStack, and I think a lot of people would give up and just run
> Nova-Network, which I will note is *still the default*.
> 

local.conf is fine, but I believe we should still hide predefined sets of 
configuration values that define ‘roles’ like QoS or L3 or VPNaaS behind 
‘services’ (like q-qos or q-sriov).

I don’t believe the number of non-default knobs is the issue that bothers 
people and makes them use nova-network. The fact that the default installation 
does not set up networking properly is the issue, though.

> We need to keep our ties strong with other projects, and improve them in
> some cases. I think culturally, if we start trying to move things into
> our corner of the sandbox because working with other groups is hard, we
> send bad signals to others. This will eventually come back to bite us.

Well, it seems that the general trend in -dev projects like devstack or 
grenade is to give projects a plugin interface and then push them into 
adopting their pieces of code through that interface. I will merely note that 
for QoS, the initial idea was to introduce a q-qos service into devstack, but 
the devstack core team (reasonably) pushed us into the plugin world. They 
forbid new features in grenade for the same reason, so that we have motivation 
to move out of tree.

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ryan Moats
(With apologies to the Who)...

"Meet the new things, same as the old things"

DVR - let's make it real folks :)

Performance - I keep turning over rocks and finding things that just don't
make sense to me...

I suspect others will come a calling as we go...

Ryan Moats
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Gal Sagie
This is going to be a busy cycle for me, but with many exciting and
interesting topics:

*Kuryr*

Kuryr is starting to gain interest from the community and is going to be
demoed and presented at OpenStack Tokyo. I think all this positive feedback
means we are on the right track, and we have a very busy roadmap for Mitaka.

   1) The Kuryr spec in the Neutron repository [1] is accepted by most of the
      Magnum team and the Kuryr team; we are planning to demo the first
      milestone in Tokyo
   2) Integration with Magnum and support for the nested-containers-inside-a-VM
      use case
   3) Containerised Neutron plugins using Kolla
   4) Finalising the generic VIF binding layer for OVS (reference
      implementation), which can also be used by OVN
   5) Testing and stability
   6) Kubernetes integration

*Neutron*

   1) The 'add tags to Neutron resources' spec [2] (hopefully it gets accepted
      by the community)
   2) Port forwarding in Neutron [3] (still needs work to finalise the design
      details and spec); in my view this is a much-needed feature, and one of
      its use cases is to support Docker port mapping for Kuryr
   3) Mentoring program - I would like to present an initiative that will help
      new contributors and experienced members find each other and delegate
      tasks based on mutual interest. I hope this will help both sides in the
      long run and simplify the process for everyone.

*Dragonflow*

The Dragonflow Liberty release already works as a distributed control plane
across all compute nodes and implements L2, distributed L3 and distributed
DHCP. It has a pluggable DB layer which already supports different backends
(etcd, RethinkDB, OVSDB and RAMCloud), and the process of integrating a new DB
backend is very simple.
Dragonflow aims at solving some of the known scale/performance/HA/latency
problems in today's SDN.
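
To give a feel for what "pluggable DB layer" means in practice, here is a
purely illustrative sketch of the kind of driver contract involved (the names
are mine for illustration, not Dragonflow's actual API):

# Hypothetical sketch of a pluggable DB driver contract: one interface,
# many backends (etcd, RethinkDB, OVSDB, RAMCloud, ...). All names here
# are illustrative only.
import abc


class NbDbDriver(abc.ABC):
    """Minimal contract a DB backend would implement."""

    @abc.abstractmethod
    def get_key(self, table, key):
        """Return the value stored under (table, key), or None."""

    @abc.abstractmethod
    def set_key(self, table, key, value):
        """Create or update (table, key) with value."""

    @abc.abstractmethod
    def register_notification_callback(self, callback):
        """Call callback(table, key, action) when data changes remotely."""


class InMemoryDriver(NbDbDriver):
    """Trivial stand-in backend, handy for unit testing the layer above."""

    def __init__(self):
        self._data = {}
        self._callbacks = []

    def get_key(self, table, key):
        return self._data.get((table, key))

    def set_key(self, table, key, value):
        action = 'update' if (table, key) in self._data else 'create'
        self._data[(table, key)] = value
        for cb in self._callbacks:
            cb(table, key, action)

    def register_notification_callback(self, callback):
        self._callbacks.append(callback)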

   1) There are some new people joining and starting to work on the project
      (some remotely), which means I will have to do some coaching and helping
      of new contributors; anyone interested in the project is always welcome.
   2) The pluggable DB layer currently supports full pro-activeness (all data
      is synced to all nodes). In the next cycle we plan to support selective
      pro-activeness (meaning each node will only be synced with the data
      relevant to it, depending on the virtual network topology), and a
      proactive-reactive approach with a local cache, which means not all data
      is fully distributed; some of it is queried on demand and cached on the
      local node (see the sketch after this list).
   3) Support missing Neutron APIs (provider networks)
   4) Extensibility - we have some interesting ideas about how external
      applications or network functions can control parts of the Dragonflow
      pipeline without code changes and implement them in a distributed manner
      (using OpenFlow)
   5) Smart NICs - leveraging HW offloading and current acceleration techniques
      with an adjusted Dragonflow pipeline (we already have some partnerships
      going on in this area)
   6) Distributed SNAT/DNAT
   7) Scale/performance/testing - using projects like stackforge/shaker and
      deploying Dragonflow in large-scale deployments
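
Regarding the proactive-reactive approach in item 2 above, here is the rough
shape of the lookup path with a local cache (again, the names and shapes are
mine and only meant to illustrate the idea):

# Illustrative only: local-cache-with-fallback lookup, i.e. the
# proactive-reactive pattern described in item 2 above. db_driver is any
# backend exposing get_key() (see the driver sketch earlier in this mail).

class CachedTopologyLookup(object):

    def __init__(self, db_driver):
        self._db = db_driver
        self._cache = {}

    def get(self, table, key):
        # Serve from the local cache when possible; otherwise query the
        # central DB on demand and remember the answer.
        if (table, key) not in self._cache:
            self._cache[(table, key)] = self._db.get_key(table, key)
        return self._cache[(table, key)]

    def invalidate(self, table, key):
        # Hooked up to the DB notification callback, so stale entries are
        # dropped when the central copy changes.
        self._cache.pop((table, key), None)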

*OVN*

   1) OVN integration with Kuryr
   2) L3 in OVN
   3) Fault detection and management/monitoring in OVN
   4) Testing and picking up on tasks
   5) Hopefully trying to share some of the Dragonflow concepts (pluggable DB,
      extensibility) with OVN

*Blogging*

Sharing information and status on my blog [4] is very important in my view,
and I hope to continue doing so for all the above topics, hopefully providing
more visibility into these projects and more help to the community.

[1] https://review.openstack.org/#/c/213490/
[2] https://review.openstack.org/#/c/216021/
[3] https://review.openstack.org/#/c/224727/
[4] http://galsagie.github.io


On Thu, Oct 1, 2015 at 10:32 PM, Ryan Moats  wrote:

> (With apologies to the Who)...
>
> "Meet the new things, same as the old things"
>
> DVR - let's make it real folks :)
>
> Performance - I keep turning over rocks and finding things that just don't
> make sense to me...
>
> I suspect others will come a calling as we go...
>
> Ryan Moats
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev