Re: [openstack-dev] [neutron][networking-sfc] Proposing Mohan Kumar for networking-sfc core
+1

On 6/13/2016 14:35, Cathy Zhang wrote:
> Mohan has been working on the networking-sfc project for over one year. He is a key contributor to the design/coding/testing of the SFC CLI, SFC Horizon support, and ONOS controller support for SFC functionality. He has been great at helping out with bug fixes, testing, and reviews of all components of networking-sfc. He is also actively providing guidance to users on their SFC setup, testing, and usage. Mohan has shown a very good understanding of the networking-sfc design, code base, and its usage scenarios. Networking-sfc could use more cores as our user base and feature set have grown, and I think he'd be a valuable addition. Please respond with your +1 votes to approve this change or -1 votes to oppose.
>
> Thanks,
> Cathy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][vpnaas]Question about MPLS VPN
On 5/26/2016 02:50, zhangyali (D) wrote:
> I am interested in the VPNaaS project in Neutron. I notice that only the IPsec tunnel type has been completed; other types of VPN, such as MPLS/BGP, have not. I'd like to know how the MPLS/BGP VPN work is going. What mechanism or extra work needs to be done?

For MPLS/BGP VPNs, refer to the networking-bgpvpn project rather than VPNaaS. http://docs.openstack.org/developer/networking-bgpvpn
Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
On 5/25/2016 13:24, Tim Rozet wrote:
> In my opinion, it is a better approach to break this down into plugin vs. driver support. There should be no problem adding NSH support to the networking-sfc plugin today. The OVS driver, however, depends on OVS as the dataplane - which I can see a solid argument for only supporting an official version with a non-NSH solution. The plugin side should have no dependency on OVS. Therefore, if we add NSH SFC support to an ODL driver in networking-odl and use that as our networking-sfc driver, the argument about OVS goes away (since neutron/networking-sfc is totally unaware of the dataplane at this point). We would just need to ensure that API calls to networking-sfc specifying NSH port pairs return an error if the enabled driver is OVS (until an official OVS with NSH support is released).

Does ODL have a dataplane? I thought it used OvS. Is the ODL project supporting its own fork of OvS that has NSH support, or is ODL expecting that the user will patch OvS themselves? I don't know the details of why OvS hasn't added NSH support, so I can't judge the validity of the concerns, but one way or another there has to be a production-quality dataplane for networking-sfc to front-end.

If ODL has forked OvS, or is in some other manner supporting its own NSH-capable dataplane, then it's reasonable to consider that the ODL driver could be the first networking-sfc driver to support NSH. However, we still need to make sure that the API is an abstraction, not implementation specific. But if ODL is not supporting its own NSH-capable dataplane, and is instead expecting the user to run a patched OvS that lacks upstream acceptance, then I think we would be building a rickety tower by piling networking-sfc on top of that unstable base.
Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
On Fri, 13 May 2016 17:13:59 -0700 "Armando M." wrote:
> On 13 May 2016 at 16:10, Elzur, Uri wrote:
> > Hi Cathy
> >
> > Thank you for the quick response. This is the essence of my question - does Neutron keep OvS as a gold standard, and why?
>
> Not at all true. Neutron, the open source implementation, uses a variety of open components, OVS being one of them. If you know of any open component that supports NSH readily available today, I'd be happy to hear about it.

I agree with Armando and Cathy. There's nothing "gold standard" about OvS. The networking-sfc approach is to separate the API from the backend drivers, and the OvS driver is only one of several. We have a place in the API where we expect to capture the tenant's intent to use NSH. What we don't currently have is a backend, OvS or other, that supports NSH.

The actual dataplane forwarder is not part of networking-sfc. We aren't going to maintain the out-of-tree OvS NSH code or depend on it. When OvS accepts the NSH functionality upstream, our networking-sfc driver will be able to make use of it. If there is any other vSwitch/vRouter that already supports NSH and someone wants to write a networking-sfc driver for it, that code would be welcome.

We've also started discussing how to implement a capabilities discovery API: if some backends support a capability (e.g. NSH) and other backends don't, we will provide the tenant with an abstract way to query the networking-sfc API in order to determine whether a particular capability can be provided by the current backend.

The thing networking-sfc won't take on is ownership of the upstream dataplane forwarder projects. We'll simply provide an abstraction so that a common API can invoke SFC across pre-existing SFC-capable dataplanes.
Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle
SFC team, and anybody else dealing with flow selection/classification (e.g. QoS):

I just wanted to confirm that we're planning to meet in Salon C today (Wednesday) to get lunch, then possibly move to a quieter location to discuss the common flow classifier ideas.

On 4/21/2016 19:42, Cathy Zhang wrote:
> I like Malini’s suggestion on meeting for a lunch to get to know each other, then continuing on Thursday. So let’s meet in "Salon C" for lunch from 12:30pm~1:50pm on Wednesday and then continue the discussion in Room 400 at 3:10pm Thursday. Since Salon C is a big room, I will put a sign “Common Flow Classifier and OVS Agent Extension” on the table. I have created an etherpad for the discussion. https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
Re: [openstack-dev] [neutron] [networking-sfc] Networking-sfc project f2f meet-up place and time
On 4/26/2016 00:35, Akihiro Motoki wrote:
> Hi Cathy and folks interested in SFC and classifiers! Can't we use a different room like Salon D? Salon C is a lunch room, and at that time there are no sessions in other rooms. It would be great if we can use Salon D or E (after looking at the first day's sessions). I think we can gather more easily and concentrate the discussion if we use some different space. Thoughts?

Akihiro,

Unless I've misunderstood the emails, the plan for Tuesday is a social lunch for the SFC team to get together. The plan for Wednesday is a working lunch to discuss flow classifiers in various projects and figure out how to converge on a single flow classifier API/model that can be shared by everything that needs to specify flows.

If that's correct, then meeting in Salon C for lunch on Tuesday makes sense. For Wednesday we probably ought to grab boxed lunches and find a quiet room.
Re: [openstack-dev] [neutron][networking-sfc] API clarification questions
On 10/28/2015 04:14, Giuseppe (Pino) de Candia wrote:
> On Wed, Oct 28, 2015 at 4:20 PM, Russell Bryant wrote:
> > I read through the proposed SFC API here: http://docs.openstack.org/developer/networking-sfc/api.html
>
> We have similarly been experimenting with a MidoNet implementation of the SFC API.

Russell, Giuseppe,

Have you looked any further at OVN/MidoNet implementations of the SFC API? It's been a long road to get the initial OvS implementation done, and there are still some pieces that haven't been merged yet, but we'd be very interested in feedback on the pluggable driver model. The intent has always been to create a common SFC API with an ML2-inspired separation of the API from the backend, so that service chains can be created using the same API calls across multiple SDN controller backends. I'd like to capture feedback on the driver model and on whether we've got the right bits in place to allow for hooking in various SDN controllers.
Re: [openstack-dev] [neutron][networking-sfc] Deploying current networking-sfc
On 11/16/2015 3:39 PM, Sean M. Collins wrote:
> Networking-sfc is still just a shell - there is no functional code.

To be more clear, most of the code has not yet been merged. The pending reviews, which contain lots of functional code targeted at merging into the networking-sfc repo, are here: https://review.openstack.org/#/q/status:open+project:openstack/networking-sfc,n,z

I believe Igor is aware of this and is trying to figure out how to pull in all the necessary pending changes that are targeted at the networking-sfc repo but which haven't merged yet. This is a challenge. Personally, I haven't been able to get it all working yet. But we're in a dilemma because we want to get good reviews of the code before merging. Since the full functionality is quite a lot of code, we wanted to chop it into more easily reviewable chunks. Unfortunately, that makes it more difficult to pull it all in at once to test the whole thing prior to completing the review and merge.

Either way, there's a whole lot of code. It just* needs to be reviewed.

* OK, "just" may downplay the amount of effort needed.
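For anyone trying to pull the pending changes locally: Gerrit publishes every pending patchset under a predictable ref, refs/changes/<last two digits of change number>/<change number>/<patchset>, which `git fetch` can retrieve directly. A minimal sketch of building that ref (the change number and patchset below are hypothetical, chosen only for illustration):

```python
def gerrit_change_ref(change_num, patchset):
    """Build the Gerrit fetch ref for a pending change.

    Gerrit's convention: refs/changes/<NN>/<change>/<patchset>,
    where NN is the change number modulo 100, zero-padded.
    """
    return "refs/changes/%02d/%d/%d" % (change_num % 100, change_num, patchset)


# Hypothetical change 177946, patchset 4 -> refs/changes/46/177946/4
# Fetch and apply it with something like:
#   git fetch https://review.openstack.org/openstack/networking-sfc \
#       refs/changes/46/177946/4 && git cherry-pick FETCH_HEAD
print(gerrit_change_ref(177946, 4))
```

Cherry-picking several inter-dependent pending changes in the right order is exactly the tedious part being described above; the ref arithmetic is the only mechanical piece.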
Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?
On 11/13/2015 7:03 PM, Henry Fourie wrote:
> > I wonder whether just pushing flows into the existing tables at random points in time can be unstable and break the usual flow assumed by the main agent loop.
>
> LF> No, I do not expect any issues.

Am I making sense? I attempted to describe a possible issue at the bottom of the etherpad [1], in the bullet point "Overall requirement - Flow prioritization mechanism".

[1] https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers
On 11/3/2015 1:03 PM, Sean M. Collins wrote:
> Anyway, the code is currently up on GitHub - I just threw it on there because I wanted to scratch my hacking itch quickly. https://github.com/sc68cal/neutron-classifier

Sean,

How much is needed to turn your models into something runnable, to the extent of populating a database? I'm not really all that proficient with SQLAlchemy or SQL in general, so I can't really visualize what the polymorphism statements in your model actually create. I'd like to create a few classifier rules, see what gets populated into the database, and also understand how complicated an SQL query SQLAlchemy generates in order to reassemble each rule from its polymorphic representation in the database.
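For readers unfamiliar with what SQLAlchemy polymorphism creates: joined-table inheritance maps a base class and each subclass to separate tables, and a query against the base class transparently JOINs them back together. A minimal, self-contained sketch (the model names here are hypothetical, not the actual neutron-classifier models):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Classifier(Base):
    """Base table; the 'type' column discriminates subclass rows."""
    __tablename__ = "classifiers"
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    __mapper_args__ = {"polymorphic_on": type,
                       "polymorphic_identity": "classifier"}


class IpClassifier(Classifier):
    """Subclass table; shares its primary key with the base table."""
    __tablename__ = "ip_classifiers"
    id = Column(Integer, ForeignKey("classifiers.id"), primary_key=True)
    source_ip = Column(String(64))
    __mapper_args__ = {"polymorphic_identity": "ip"}


engine = create_engine("sqlite://")       # in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(IpClassifier(source_ip="10.0.0.0/24"))
session.commit()

# Querying the base class reassembles each rule via a JOIN across tables.
rows = session.query(Classifier).all()
print(rows[0].type, rows[0].source_ip)
```

Turning on `create_engine(..., echo=True)` prints the generated SQL, which directly answers the "how complicated a query" question for a given model.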
Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers
On 11/12/2015 3:50 PM, Ihar Hrachyshka wrote:
> All I am saying is that IF we merge some classifier API into neutron core and start using it for core, non-experimental features, we cannot later move to some newer version of this API [that you will iterate on] without leaving a huge pile of compatibility code that would not exist in the first place if only we thought about a proper API in advance. If that’s what you envision, fine; but I believe it will make adoption of the ‘evolving’ API a lot slower than it could otherwise be.

I don't think I disagree at all. But we don't have a classifier API in neutron core (unless we consider security groups to be it), and I don't think anyone is saying that the classifier in networking-sfc should be merged straight into core as-is. In fact, I think we're saying exactly the opposite: *a* classifier will sit in networking-sfc, outside of core neutron, until *some* classifier is merged into core neutron. The point of networking-sfc isn't the classifier. A classifier is simply a prerequisite.

So by all means let's work on defining and merging into core neutron a classifier that we can consider non-experimental and stable for all features to share and depend on, but we don't want SFC to be non-functional while we wait for that to happen. We can call the networking-sfc classifier experimental and slap a warning on it that it'll be replaced with the core neutron classifier once such a thing has been implemented.
Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers
On 11/10/2015 8:30 AM, Sean M. Collins wrote:
> On Mon, Nov 09, 2015 at 07:58:34AM EST, Jay Pipes wrote:
> > 2) Keep the security-group API as-is to keep outward compatibility with AWS. Create a single, new service-groups and service-group-rules API for L2 to L7 traffic classification using mostly the modeling that Sean has put together. Remove the networking-sfc repo and obsolete the classifier spec. Not sure what should/would happen to the FWaaS API, frankly.
>
> As to the RESTful API for creating classifiers, I don't know if it should reside in the networking-sfc project. It's a big enough piece that it will most likely need to be its own endpoint and repo, and have stakeholders from other projects, not just networking-sfc. That will take time and quite a bit of wrangling, so I'd like to defer that for a bit and just work on all the services having the same data model, where we can make changes quickly, since they are not visible to API consumers.

I agree that the service classifier API should NOT reside in the networking-sfc project, but I don't understand why Jay suggests removing the networking-sfc repo. The classifier specified by networking-sfc is needed only because there isn't a pre-existing classifier API. As soon as we can converge on a common classifier API, I am completely in favor of using it in place of the one in the networking-sfc repo, but SFC is more than just classifying traffic. We need a classifier in order to determine which traffic to redirect, but we also need the API to specify how to redirect the traffic that has been identified by classifiers.
Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?
On 11/9/2015 9:59 PM, Vikram Choudhary wrote:
> Hi Cathy,
>
> Could you please check on this? My mother passed away yesterday and I will be on leave for a couple of weeks.

I'm very sorry to hear that. Please take all the time you need.
[openstack-dev] [docs][networking-sfc]
Can someone from the Docs team take a look at why there isn't a docs URL for the networking-sfc repo?

Compare [1] vs [2]. The first URL appears to be a rendering of doc/source/index.rst from the Neutron Git repo, but the second one gives a Not Found even though there is a doc/source/index.rst in the networking-sfc repo. If I've guessed the wrong URL, please let me know. I just guessed that replacing the name of the neutron repo in the URL with the name of the networking-sfc repo should give me the right URL.

Compare [3] vs [4]. Both of these exist, and as far as I can tell [1] is rendered from [3]; I would naturally expect [2] to be rendered from [4], but it isn't.

[1] http://docs.openstack.org/developer/neutron/
[2] http://docs.openstack.org/developer/networking-sfc/
[3] https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/index.rst
[4] https://git.openstack.org/cgit/openstack/networking-sfc/tree/doc/source/index.rst
Re: [openstack-dev] [neutron][sfc] Neutron stadium distribution and/or packaging
It's possible that I've misunderstood "Big Tent/Stadium", but I thought we were talking about enhancements to Neutron, not separate unrelated projects. We have several efforts focused on adding capabilities to Neutron. This isn't about "polluting" the Neutron namespace but rather about adding capabilities that Neutron currently is missing.

My concern is that we need to add to the Neutron API and the Neutron CLI, and enhance the capabilities of the OvS agent. I'm under the impression that the "Neutron Stadium" allows us to do this, but I'm fuzzy on the implementation details. Is the "Neutron Stadium" expected to allow additions to the Neutron API, the Neutron client, and Neutron components such as ML2 and the OvS agent?
[openstack-dev] [neutron][sfc] Neutron stadium distribution and/or packaging
Has anyone written anything up about expectations for how "Big Tent" or "Neutron Stadium" projects are expected to be installed/distributed/packaged? In particular, I'm wondering how we're supposed to handle changes to Neutron components. For the networking-sfc project we need to make additions to the API and corresponding additions to neutronclient, as well as modifying the OvS agent to configure new flow table entries in OvS. The code is in a separate Git repo, as is expected of a Stadium project, but it doesn't make sense that we would package altered copies of files that are deployed by the regular Neutron packages.

Should we be creating 99%+ of the functionality in filenames that don't conflict, and then making changes to files in the Neutron and neutronclient repos to stitch together the 1% that adds our new functionality to the existing components? Or do we stage the code in the Stadium project's repo and then subsequently request to merge it into the neutron/neutronclient repos? Or is there some other preferred way to integrate the added features?
Re: [openstack-dev] [neutron] till when must code land to make it in liberty
On 7/31/2015 9:47 AM, Kyle Mestery wrote:
> However, it's reasonable to assume that the later you propose your RFE bug, the less of a chance it has of making it. We do enforce the Feature Freeze [2], which is the week of August 31 [3]. Thus, effectively you have 4 weeks to submit patches for new features.

Does the feature freeze apply to "big tent" work? I certainly think we should try to stick as close to the Neutron process as possible, but I'm wondering if we need to consider August 31 a hard deadline for the networking-sfc work. I suspect we won't be feature complete by the 31st; we will probably need to work well into September in order to ensure that we have something with all the necessary parts working.
Re: [openstack-dev] [Neutron][SFC]The proposed "Neutron API extension for packet forwarding" has a lot of duplication to the Neutron SFC API
On 7/27/2015 4:49 PM, Sean M. Collins wrote:
> I think when the API is too complex, to the point where python-neutronclient is expected to create a better UX, that means the API itself may need some further thinking and simplification. I think you are right, however, that "Get me a network" is the first case where we've recognized that the workflow to create a tenant network and have internet connectivity is quite involved, and that there needs to be some more automation of the different steps.

I don't think it's a matter of expecting python-neutronclient to provide a better UX for the full SFC API. It's more a matter of two different classes of user. One class of user has some fairly complex use cases that can't be satisfied with a hard-coded "one true logic" behind a simple "do what I want" API. Another class of user doesn't need all that complexity and would like a single API call that does exactly the one use case they need, without the flexibility of handling all the similar but not identical use cases.

More input on ways to simplify the API [1] without reducing functionality is welcome, but that wasn't what my question was about. My question was: if there's one particular use case that's especially common, but it's essentially just a single use case out of a collection of similar use cases, is it reasonable to create an API and/or CLI "shortcut" that allows people who don't want or need the full range of use cases to just get their common one?

P.S. I'm not offering any opinion on whether [2] is or is not in fact common. I'm just saying that [2] appears to be a subset of [1], but [2] isn't sufficient to meet the needs of people who need [1]. Rather than implementing both [1] and [2] independently, or forcing people who want [2] to use [1], I'm saying that it might be nice if we could provide something approximately equivalent to [2] using the implementation mechanics of [1].

[1] https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst
[2] https://review.openstack.org/#/c/186663/
Re: [openstack-dev] The proposed "Neutron API extension for packet forwarding" has a lot of duplication to the Neutron SFC API
On 7/27/2015 5:20 PM, Anita Kuno wrote:
> I think you need to acknowledge, in both the email topic and in the content, that Sean tried to draw attention to the fact that you are duplicating this work on July 16th. Collaboration is much more than "our meeting decided you shouldn't do your work". Perhaps taking a step back and acknowledging the work of others might set a nicer tone to your efforts.

Anita,

I think it might just be a matter of wording and the limited bandwidth of written communication. I believe Cathy was doing exactly what you seem to be accusing her of not doing: specifically, raising on the mailing list a topic of discussion that was covered during an IRC meeting and asking for input from anyone who may have an interest but who wasn't part of the IRC meeting.

The SFC team isn't unilaterally vetoing [1]; we simply seemed to reach a consensus that the two APIs are confusingly similar. Cathy's email was specifically to ask whether anyone else is opposed to the idea of using the more comprehensive API to perform the function of the less feature-full API. If there's any reason we've overlooked why [1] shouldn't be considered a use case within the broader SFC feature, then responses to Cathy's email are very welcome. That's why she said "Please let us know if you have different opinion."

[1] https://review.openstack.org/#/c/186663/
Re: [openstack-dev] [Neutron][SFC]The proposed "Neutron API extension for packet forwarding" has a lot of duplication to the Neutron SFC API
On 7/24/2015 6:50 PM, Cathy Zhang wrote:
> Hi Everyone,
>
> In our last networking-sfc project IRC meeting, an issue was brought up that the API proposed in https://review.openstack.org/#/c/186663/ has a lot of duplication with the SFC API https://review.openstack.org/#/c/192933/ that is currently being implemented. In the IRC meeting, the project team reached consensus that we only need one API, and that the service chain API can cover the functionality needed by https://review.openstack.org/#/c/186663/. Please refer to the meeting log http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-07-23-17.02.log.html for more discussion. Please let us know if you have a different opinion.
>
> Thanks,

I would, however, like input on the idea of CLI and API shortcuts. I don't think the API proposed in 186663 should be a completely separate implementation of creating flow table entries, but I can see the appeal of CLI options, and perhaps API operations, that allow the end user a quick and easy way of invoking the degenerate case without going through the multi-step, multi-API-call execution of the full API.

Is there a precedent for CLI options and/or single API calls that invoke a predefined multi-step path through a more comprehensive API? Perhaps the "Get me a network" work, for example? It isn't very user friendly to force people to learn and navigate a complicated and comprehensive API if all they want to do is one simple and very common use case out of a myriad of possible and possibly esoteric applications of the full API.
[openstack-dev] [Neutron][SFC] Launchpad cleanup
Is it possible to delete dead blueprints, or at least change the section at the top to just provide a URL to the blueprint that supersedes it? Blueprint [1] links to a bunch of abandoned reviews and references its parent blueprint [2], which also links to abandoned work. The current work on service chaining is taking place under [3] and [4].

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-service-chaining
[2] https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
[3] https://blueprints.launchpad.net/neutron/+spec/openstack-service-chain-framework
[4] https://blueprints.launchpad.net/neutron/+spec/neutron-api-extension-for-service-chaining
Re: [openstack-dev] [Neutron][SFC] Wiki update - deleting old SFC API
On the general topic of wiki cleanup, what's the preferred mechanism for documenting APIs? Wiki page [1] largely duplicates the content of the spec in [2]. I dislike duplication of information because it's likely to get out of sync; I'd rather use hyperlinks whenever possible. However, linking to a Gerrit review isn't the most end-user-friendly way of presenting an API. One option is to link to the GitHub version of the rendered RST file [3], but I'd like to know if there are any other preferred practices.

[1] https://wiki.openstack.org/wiki/Neutron/APIForServiceChaining
[2] https://review.openstack.org/#/c/192933/
[3] https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst
Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers
On 7/23/2015 2:45 PM, Kevin Benton wrote:
> We ran into this long ago. What are some other examples? We've been good about keeping the network L2-only. Segments, VLAN transparency, and other properties of the network are all L2. The example you gave about needing the network to see the grouping of subnets isn't the network leaking into L3; it's subnets requiring an L2 container. Networks don't depend on subnets; subnets depend on networks. I would rather look at making that dependency nullable and achieving your grouping another way (e.g. subnetpool).

I think Kevin is right here. A network is fundamentally a layer 2 construct; it represents direct reachability. A network could in principle support non-IP traffic (though in practice that may or may not work, depending on the underlying implementation). A subnet is fundamentally a layer 3 construct; it represents addressing for traffic that may need to flow between different networks (quite literally, that's where the name *inter*net protocol comes from). Because there is often a 1:1 relationship between network and subnet, it's easy to blur the distinction, but I think it's worth keeping the concepts clear. An address scope or supernet (in the specific meaning of a summarized collection of subnets, e.g. a /23 made up of 8 /26s) is a more accurate conceptual representation of multiple L2 segments with routing between them.
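The /23-into-/26s arithmetic above can be checked with Python's standard `ipaddress` module: going from a /23 to /26s adds 3 prefix bits, so the supernet splits into 2**3 = 8 subnets. A quick sketch:

```python
import ipaddress

# A /23 supernet summarizes eight /26 subnets (3 extra prefix bits -> 2**3 = 8).
supernet = ipaddress.ip_network("10.0.0.0/23")
subnets = list(supernet.subnets(new_prefix=26))

print(len(subnets))                                  # 8
print(subnets[0])                                    # 10.0.0.0/26
print(all(s.subnet_of(supernet) for s in subnets))   # True
```

Each /26 here would correspond to one routed L2 segment, with the /23 being the summarized route advertised for the whole group.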
[openstack-dev] [Neutron][SFC] Wiki update - deleting old SFC API
As a courtesy to anyone who may have worked on it, I wanted to notify the list that I'm going to delete [1] from wiki page [2]. I may actually delete [2] completely. I'm going to update the content on [3] to reference the new SFC API spec that has just been merged. Currently [3] links to [2], which links to [1]. [1] is a Google doc from 2013. If anybody who worked on it (or even anyone who didn't) would like to review [4] to see if we missed anything critical, your input is most welcome.

This is part of a larger effort to improve search engine results, because currently searches along the lines of "Neutron service chaining" rank abandoned specs and outdated documents higher than the current work that is being actively implemented.

[1] https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit
[2] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining/API
[3] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
[4] https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst
[openstack-dev] [Neutron] Flavor framework
What is the status of the flavor framework? Is this the right spec? https://blueprints.launchpad.net/neutron/+spec/neutron-flavor-framework I'm trying to sort through how the ML3 proposal https://review.openstack.org/#/c/105078/ fits in with requirements for high performance (high throughput, high packets per second, low latency) needed for workloads like VoIP, video and other NFV functions. It seems to be the consensus that a significant part of the intent of ML3 is "flavors for routers" and that makes it natural that ML3 should have a dependency on flavor framework but the blueprint referenced above doesn't have a series or milestone target.
Re: [openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/04/2015
Cathy, Make sure to take note when Fall rolls around that "pacific time" is ambiguous. UTC does not observe daylight saving time, so a meeting at 1700 UTC will be 10:00 PDT but 09:00 PST. On 6/4/2015 5:17 PM, Cathy Zhang wrote: Thanks for joining the service chaining meeting today! Sorry for the time confusion. We will correct the weekly meeting time to 1700 UTC (10am pacific time) Thursday #openstack-meeting-4 on the OpenStack meeting page. Meeting Minutes: http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.html Meeting Minutes (text): http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.txt Meeting Log: http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-06-04-16.59.log.html The next meeting is scheduled for June 11 (same place and time). Thanks, Cathy
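The ambiguity can be demonstrated with Python's zoneinfo; the two dates below are arbitrary examples of a Thursday inside and outside the US daylight-saving period in 2015:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

utc = ZoneInfo("UTC")
pacific = ZoneInfo("America/Los_Angeles")

# 1700 UTC in June falls in daylight saving time -> 10:00 PDT
summer = datetime(2015, 6, 11, 17, 0, tzinfo=utc).astimezone(pacific)
# 1700 UTC in December falls in standard time -> 09:00 PST
winter = datetime(2015, 12, 10, 17, 0, tzinfo=utc).astimezone(pacific)

print(summer.strftime("%H:%M %Z"))  # 10:00 PDT
print(winter.strftime("%H:%M %Z"))  # 09:00 PST
```

This is why the meeting page pins the slot to 1700 UTC: the UTC time stays fixed year-round while the local Pacific wall-clock time shifts by an hour.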
Re: [openstack-dev] [Neutron] (RE: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service)
On 2/24/2015 6:47 PM, Kevin Benton wrote: More seriously, have you considered starting a tap-as-a-service project on stackforge now that the services split has established a framework for advanced services? Uploading the code you are using to do it is a great way to get people motivated to try it, propose new features, critique it, etc. If you can't upload it because your approach would be proprietary, then would upstream support even be relevant? Right now we haven't written any code, but my concern is really more about standardizing the API. We're currently weighing two categories of options. One is to evaluate a number of open- and closed-source SDN software as plugins to Neutron. I'm not going to list names, but the candidates are represented in the plugins and ml2 subdirectories of Neutron. Many of these provide tap/mirror functionality, but since there's no standard Neutron API we would be coding to a vendor-specific API and would have to call multiple different APIs to do the same thing if we deploy different ones in different locations over time. The other option that we've considered is to extend a piece of software we've written that currently has nothing to do with tap/mirror but does perform some OvS flow modifications. If we went with this route we certainly would consider open sourcing it, but right now this is the less likely plan B. It actually doesn't matter to me very much whether Neutron implements the tap functionality, but I'd really like to see a standard API call that the various SDN vendors could get behind. Right now it's possible to make Neutron API calls to manipulate networks, ports, and subnets and expect that they will function essentially the same way regardless of the underlying implementation from a variety of hardware and software vendors. But if we want to mirror a vSwitch port to an analyzer we have a myriad of vendor-specific API calls that are entirely dependent on the underlying software and/or hardware beneath Neutron.