[openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-08 Thread Robert Kukura
[Note - I understand there are ongoing discussion that may lead to a 
proposal for an out-of-tree incubation process for new Neutron features. 
This is a complementary proposal that describes how our existing 
development process can be used to stabilize new features in-tree over 
the time frame of a release cycle or two. We should fully consider both 
proposals, and where each might apply. I hope something like the 
approach I propose here will allow the implementations of Neutron BPs 
with non-trivial APIs that have been targeted for the Juno release to be 
included in that release, used by early adopters, and stabilized as 
quickly as possible for general consumption.]


According to our existing development process, once a blueprint and 
associated specification for a new Neutron feature have been reviewed, 
approved, and targeted to a release, development proceeds, resulting in 
a series of patches to be reviewed and merged to the Neutron source 
tree. This source tree is then the basis for milestone releases and the 
final release for the cycle.


Ideally, this development process would conclude successfully, well in 
advance of the cycle's final release, and the resulting feature and its 
API would be considered fully "stable" in that release. Stable features 
are ready for widespread general deployment. Going forward, any further 
modifications to a stable API must be backwards-compatible with 
previously released versions. Upgrades must not lose any persistent 
state associated with stable features. Upgrade processes and their 
impact on a deployment (downtime, etc.) should be consistent for all 
stable features.


In reality, we developers are not perfect, and minor (or more 
significant) changes may be identified as necessary or highly desirable 
once early adopters of the new feature have had a chance to use it. 
These changes may be difficult or impossible to do in a way that honors 
the guarantees associated with stable features.


For new features that affect the "core" Neutron API and therefore impact 
all Neutron deployments, the stability requirement is strict. But for 
features that do not affect the core API, such as services whose 
deployment is optional, the stability requirement can be relaxed 
initially, allowing time for feedback from early adopters to be 
incorporated before declaring these APIs stable. The key in doing this 
is to manage the expectations of developers, packagers, operators, and 
end users regarding these new optional features while they stabilize.


I therefore propose that we manage these expectations, while new 
optional features in the source tree stabilize, by clearly labeling 
these features with the term "preview" until they are declared stable, 
and sufficiently isolating them so that they are not confused with 
stable features. The proposed guidelines would apply during development 
as the patches implementing the feature are first merged, in the initial 
release containing the feature, and in any subsequent releases that are 
necessary to fully stabilize the feature.


Here are my initial not-fully-baked ideas for how our current process 
can be adapted with a "preview feature" concept supporting in-tree 
stabilization of optional features:


* Preview features are implementations of blueprints that have been 
reviewed, approved, and targeted for a Neutron release. The process is 
intended for features for which there is a commitment to add the feature 
to Neutron, not for experimentation where "failing fast" is an 
acceptable outcome.


* Preview features must be optional to deploy, such as by configuring a 
service plugin or some set of drivers (see the configuration sketch at 
the end of this list). Blueprint implementations whose deployment is not 
optional are not eligible to be treated as preview features.


* Patches implementing a preview feature are merged to the master 
branch of the Neutron source tree. This makes them immediately available 
to all direct consumers of the source tree, such as developers, 
trunk-chasing operators, packagers, and evaluators or end-users that use 
DevStack, maximizing the opportunity to get the feedback that is 
essential to quickly stabilize the feature.


* The process for reviewing, approving and merging patches implementing 
preview features is exactly the same as for all other Neutron patches. 
The patches must meet HACKING standards, be production-quality code, 
have adequate test coverage, have DB migration scripts, etc., and 
require two +2s and a +A from Neutron core developers to merge.


* DB migrations for preview features are treated similarly to other DB 
migrations, forming a single ordered list that results in the current 
overall DB schema, including the schema for the preview feature. But DB 
migrations for a preview feature are not yet required to preserve 
existing persistent state in a deployment, as would be required for a 
stable feature.


* All code that is part of a preview feature is located under 
neutron/preview//. Associated unit te
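
To make the "optional to deploy" guideline above concrete, deploying a 
preview service plugin might look roughly like the following neutron.conf 
fragment (the plugin path here is hypothetical and only illustrates the 
idea of the preview subtree):

    [DEFAULT]
    # Strictly opt-in: omitting this line leaves the preview feature
    # undeployed and the rest of the deployment unaffected.
    service_plugins = neutron.preview.foo.plugin.FooPreviewPlugin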

Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-11 Thread Robert Kukura


On 8/11/14, 4:52 AM, Thierry Carrez wrote:

gustavo panizzo (gfa) wrote:

only one thing i didn't like:

why do all urls, apis, etc. have to include the word 'preview'?
i imagine that i would be consuming the new feature using heat, puppet,
local scripts, custom horizon, whatever. Why make me change all of them
when the feature moves out of preview? it could be a lot of rework (for
consumers) without gain. I would totally support other big fat warnings
everywhere (logs, documentation, startup log of neutron-server), but
don't change the API if it's not necessary

I see two issues with this proposal: the first one is what Gustavo just
said: the use of the "preview" package/configoption/API creates friction
when the feature needs to go mainstream (package renaming, change in
configuration for deployers, change in API calls for users...).

Hi Thierry,

I completely agree with you and with Gustavo that "mangling" the REST 
URIs to include "preview" may have more cost (i.e. friction when the API 
becomes stable) than benefit. I'm happy to drop that particular part of 
the proposal. The email was intended to kick off discussion of these 
sorts of details.


My understanding is that the goal is to make it easy for people to "try"
the preview feature, and keeping the experimental feature in-tree is
seen as simpler to experiment with. But the pain from this friction imho
outweighs the pain of deploying an out-of-tree plugin for deployers.
I agree out-of-tree is a better option for truly experimental features. 
This in-tree stabilization is intended for a beta release, as opposed to 
a prototype.


The second issue is that once the feature is in "preview" in tree, it
moves the responsibility/burden of making it official (or removed) to
the core developers (as Salvatore mentioned). I kind of like the
approach where experimental features are developed in faster iterations
out-of-tree and when they are celebrated by experimenters and are ready
to be stable-supported, they are moved in tree.
I don't think we are really disagreeing here. There are clearly 
situations where rapid iteration out-of-tree, without the burden of the 
core review process, is most appropriate. But this proposal is intended 
for features that are on the cusp of being declared stable, rather than 
for experimentation. The intent is absolutely to have all changes to the 
code go through the regular core review process during this 
stabilization phase. This enables the feature to be fully reviewed and 
integrated (also including CLIs, Horizon and Heat support, 
documentation, etc.) at the point when the decision is made that no 
further incompatible API changes will be needed. Once the feature is 
declared stable, from the end-user perspective, it's just a matter of 
removing the "preview" label. Moving the feature's code from the preview 
subtree to its normal locations in the tree will not affect most users 
or operators.


Note that the GBP team had implemented a proof-of-concept prior to the 
start of the Juno cycle out-of-tree. Our initial plan was to get this 
PoC code reviewed and merged at the beginning of Juno, and then 
iteratively improve it throughout the cycle. But we got a lot of 
resistance to the idea of merging a large body of code that had been 
developed outside the Neutron development and review process. We've 
instead had to break it into multiple pieces, and make sure each of 
those is production ready, to have any chance of getting through the 
review process during Juno. It's not really clear that something 
significant developed externally can ever be "moved in tree", at least 
without major upheaval, including incompatible API changes, as it goes 
through the review/merge process.


Finally, consider that many interesting potential features for Neutron 
involve integrations with external back-ends, such as ODL or 
vendor-specific devices or controllers, along with a reference 
implementation that doesn't depend on external systems. To really 
validate that the API, the model, and the driver framework code for a 
new feature are all stable, it is necessary to implement and deploy 
several of these back-end integrations along with the reference 
implementation. But vendors may not be willing to invest substantially 
in this integration effort without the assurances about the quality and 
relative stability of the interfaces involved that come from the core 
review process, and without the clear path to market that comes with 
in-tree development of an approved blueprint targeted at a specific 
Neutron release.


-Bob


Regards,






Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-11 Thread Robert Kukura
s any kind of iterative development process. I think we should 
consider enforcing things like proven reliability, scalability, and 
usability at the point where the feature is promoted to stable rather 
than before merging the initial patch.
On the other hand we also need to avoid over-bureaucratising Neutron 
- nobody loves that - and therefore ensure this process is enforced 
only when really needed.


Looking at this proposal I see a few thing I'm not comfortable with:
- having no clear criterion for excluding a feature might imply that 
there will be silently bit-rotting code in the preview package. This is 
what would happen, for instance, if we end up with a badly maintained 
feature, but one or two core devs who care about it keep vetoing its 
removal
First, the feature will never be considered for inclusion in the preview 
sub-tree without an initial approved blueprint and specification. 
Second, I suggest we automatically remove a feature from the preview 
tree after some small number of cycles, unless a new blueprint detailing 
what needs to be done to complete stabilization is approved.
- using the normal review process will still not solve the problem of 
slow review cycles, pointless downvotes from reviewers who actually 
just do not understand the subject matter, and other pains associated 
with lack of interest from small or large parts of the core team. For 
instance, I think there is a line of pretty annoyed contributors as we 
did not even bother reviewing their specs.
Agreed. But I'm hoping a clarified set of expectations for new features 
will allow implementation, review, and merging of code for approved 
blueprints to proceed incrementally, as is intended in our current 
process, which will build up the team's familiarity with the 
new features as they are being developed.
- The current provision about QA seems to state that it's ok to keep 
code in the main repo that does not adhere to appropriate quality 
standards. This is the mistake we made with lbaas and other features, 
which I would like to avoid. And to me it is not sufficient that the 
code is buried in the 'preview' package.
Lowering code quality standards is definitely not part of the intent. The 
preview code must be production-ready in order to be merged. Its API and 
data model are just not yet declared stable.
- Most importantly, this process provides a justification for 
contributors to push features which do not meet the same standards as 
other neutron parts and expect them to be merged and eventually 
promoted, and on the other hand provides the core team with the 
entitlement to merge them - therefore my main concern is that it does 
not encourage better behaviour in people, which should be the 
ultimate aim of a process change.
I'm really confused by this interpretation of my proposal. Preview code 
patches must go through the normal review process. Each individual patch 
must meet all our normal standards. And the system-level quality 
attributes of the feature must be proven as well for the feature to be 
declared stable. But you can't prove these system level attributes until 
you get the code into the hands of early adopters and incorporate their 
feedback. Our current process can facilitate this, as long as we set 
expectations properly during this stabilization phase.


If you managed to read through all of this, and tolerated my dorky 
literature references, I really appreciate your patience, and would 
like to conclude that here we're discussing proposals for a process 
change, whereas I expect to discuss in the next neutron meeting the 
following:

- whether is acceptable to change the process now
- what went wrong in our spec review process, as we ended up with at 
least an approved spec which is actually fiercely opposed by other 
core team members.
These discussions need to happen. I don't think my proposal should be 
looked at as a major process change, but rather as a clarification of 
how our current process explicitly supports iterative development and 
stabilization of new features. It can be applied to several of the new 
features targeted for Juno. Whether there is actual opposition to the 
inclusion of any of these is a separate matter, but some clarity about 
exactly what inclusion would mean can't hurt that discussion.


Thanks for your indulgence as well,

-Bob


Have a good weekend,
Salvatore

[1] Quote from "Il Gattopardo" by Giuseppe Tomasi di Lampedusa 
(english name: The Leopard)



On 8 August 2014 22:21, Robert Kukura <kuk...@noironetworks.com> wrote:



[Note - I understand there are ongoing discussion that may lead to
a proposal for an out-of-tree incubation process for new Neutron
features. This is a complementary proposal that describes how our
existing development process can be used to stabilize new features
in-tree over the time frame of a release cycle or two. We shou

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-13 Thread Robert Kukura
One thing to keep in mind is that the ML2 driver API does sometimes 
change, requiring updates to drivers. Drivers that are in-tree get 
updated along with the driver API change. Drivers that are out-of-tree 
must be updated by the owner.
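
For out-of-tree drivers, the integration point is a stevedore entry 
point, as Cedric's link below shows. Roughly (the package, module, and 
class names here are placeholders, not the actual ODL ones):

    # setup.cfg of the out-of-tree driver package
    [entry_points]
    neutron.ml2.mechanism_drivers =
        example_mech = example_driver.driver:ExampleMechanismDriver

    # ml2_conf.ini then selects the driver by its entry point name
    [ml2]
    mechanism_drivers = openvswitch,example_mech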


-Bob

On 8/13/14, 6:59 AM, ZZelle wrote:

Hi,


The important thing to understand is how to integrate with neutron 
through stevedore/entrypoints:


https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34


Cedric


On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker wrote:


I've been working on this for OpenDaylight
https://github.com/dave-tucker/odl-neutron-drivers

This seems to work for me (tested Devstack w/ML2) but YMMV.

-- Dave



Re: [openstack-dev] [Neutron][QoS] Request to be considered for neutron-incubator

2014-08-19 Thread Robert Kukura
Actually, whether the incubator is involved for not, this might be a 
great candidate for implementation using an ML2 extension driver. See 
https://review.openstack.org/#/c/89211/ for the code under review for 
Juno, and also 
https://docs.google.com/a/noironetworks.com/document/d/14T-defRnFl6M2xR5ZNFGYD6aIiAVWdd1Al9BnjV_JOs 
for planned followup work that would enforce extension semantics during 
port binding.


-Bob

On 8/19/14, 2:46 PM, Kevin Benton wrote:

+1.

This work in particular brings up a question about the incubator. One 
of the rules was that the neutron core code can't import code from the 
incubated projects. The QoS requires a mixin to annotate the port and 
network objects with QoS settings. How exactly would we actually use 
the QoS code from the incubator since we can't import the mixins in 
the ML2 plugin?



On Tue, Aug 19, 2014 at 11:33 AM, Collins, Sean wrote:


Hi,

The QoS API extension has lived in Gerrit/been in review for about a
year. It's gone through revisions, summit design sessions, and for a
little while, a subteam.

I would like to request incubation in the upcoming incubator, so that
the code will have a more permanent "home" where we can
collaborate and
improve.
--
Sean M. Collins




--
Kevin Benton




Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread Robert Kukura
Sure, Horizon (or Heat) support is not always required for new features 
entering incubation, but when a goal in "incubating" a feature is to get 
it packaged with OpenStack distributions and into the hands of as many 
early adopters as possible to gather feedback, these integrations are 
very important.


-Bob

On 9/1/14, 9:05 AM, joehuang wrote:

Hello,

Not all features which have already been shipped in Neutron are supported by 
Horizon. For example, multi-provider network.

This is not a special case that has only happened in Neutron. For example, Glance 
delivered its V2 API in Icehouse or even earlier and supports the image 
multi-locations feature, but this feature is also not available from Horizon.

Fortunately, the CLI/python client gives us the opportunity to use this 
powerful feature.

So, it's not necessary to link Neutron incubation with Horizon tightly. The 
feature implemented in Horizon can be introduced when the incubation 
graduates.

Best regards.

Chaoyi Huang ( joehuang )


From: Maru Newby [ma...@redhat.com]
Sent: 1 September 2014 17:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging 
perspective

On Aug 26, 2014, at 5:06 PM, Pradeep Kilambi (pkilambi)  
wrote:



On 8/26/14, 4:49 AM, "Maru Newby"  wrote:


On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi)
 wrote:



On 8/23/14, 5:36 PM, "Maru Newby"  wrote:


On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam 
wrote:


On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery 
wrote:

On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka

wrote:


On 20/08/14 18:28, Salvatore Orlando wrote:

Some comments inline.

Salvatore

On 20 August 2014 17:38, Ihar Hrachyshka <ihrac...@redhat.com> wrote:

Hi all,

I've read the proposal for incubator as described at [1], and I
have several comments/concerns/suggestions to this.

Overall, the idea of giving some space for experimentation that
does not alienate parts of community from Neutron is good. In that
way, we may relax review rules and quicken turnaround for preview
features without losing control of those features too much.

Though the way it's to be implemented leaves several concerns, as
follows:

1. From packaging perspective, having a separate repository and
tarballs seems not optimal. As a packager, I would rather deal with
a single tarball instead of two. Meaning, it would be better to
keep the code in the same tree.

I know that we're afraid of shipping the code for which some users
may expect the usual level of support and stability and
compatibility. This can be solved by making it explicit that the
incubated code is unsupported and used at the user's own risk. 1) The
experimental code probably wouldn't be installed unless explicitly
requested, and 2) it would be put in a separate namespace (like
'preview', 'experimental', or 'staging', as they call it in the Linux
kernel world [2]).

This would facilitate keeping commit history instead of losing it
during graduation.

Yes, I know that people don't like to be called experimental or
preview or incubator... And maybe neutron-labs repo sounds more
appealing than an 'experimental' subtree in the core project.
Well, there are lots of EXPERIMENTAL features in Linux kernel that
we actively use (for example, btrfs is still considered
experimental by Linux kernel devs, while being exposed as a
supported option to RHEL7 users), so I don't see how that naming
concern is significant.



I think this is the whole point of the discussion around the
incubator and the reason for which, to the best of my knowledge,
no proposal has been accepted yet.

I wonder where discussion around the proposal is running. Is it
public?


The discussion started out privately as the incubation proposal was
put together, but it's now on the mailing list, in person, and in IRC
meetings. Lets keep the discussion going on list now.


In the spirit of keeping the discussion going, I think we probably
need to iterate in practice on this idea a little bit before we can
crystallize on the policy and process for this new repo. Here are few
ideas on how we can start this iteration:

* Namespace for the new repo:
Should this be in the neutron namespace, or a completely different
namespace like "neutron labs"? Perhaps creating a separate namespace
will help the packagers to avoid issues of conflicting package owners
of the namespace.

I don't think there is a technical requirement to choose a new
namespace.
Python supports sharing a namespace, and packaging can support this
feature (see: oslo.*).


From what I understand there can be overlapping code between neutron and
incubator to override/modify existing python/config files. In which case,
packaging (e.g. RPM) will raise a path conflict. So we probably will
need to worry about namespaces?

Doug's suggestion to use a separate namespace to indicate that the
incubator codebase isn’t full

Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-05 Thread Robert Kukura

Kyle,

Please consider an FFE for 
https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding. 
This was discussed extensively at Wednesday's ML2 meeting, where the 
consensus was that it would be valuable to get this into Juno if 
possible. The patches have had core reviews from Armando, Akihiro, and 
yourself. Updates to the three patches addressing the remaining review 
issues will be posted today, along with an update to the spec to bring 
it in line with the implementation.


-Bob

On 9/3/14, 8:17 AM, Kyle Mestery wrote:

Given how deep the merge queue is (146 currently), we've effectively
reached feature freeze in Neutron now (likely other projects as well).
So this morning I'm going to go through and remove BPs from Juno which
did not make the merge window. I'll also be putting temporary -2s in
the patches to ensure they don't slip in as well. I'm looking at FFEs
for the high priority items which are close but didn't quite make it:

https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

Thanks,
Kyle



Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-10 Thread Robert Kukura


On 9/9/14, 7:51 PM, Jay Pipes wrote:

On 09/09/2014 06:57 PM, Kevin Benton wrote:

Hi Jay,

The main component that won't work without direct integration is
enforcing policy on calls directly to Neutron and calls between the
plugins inside of Neutron. However, that's only one component of GBP.
All of the declarative abstractions, rendering of policy, etc can be
experimented with here in the stackforge project until the incubator is
figured out.


OK, thanks for the explanation Kevin, that helps!
I'll add that there is likely to be a close coupling between ML2 
mechanism drivers and corresponding GBP policy drivers for some of the 
back-end integrations. These will likely share local state such as 
connections to controllers, and may interact with each other as part of 
processing core and GBP API requests. Development, review, and packaging 
of these would be facilitated by having them on the same branch in the 
same repo, but it's probably not absolutely necessary.


-Bob


Best,
-jay


On Tue, Sep 9, 2014 at 12:01 PM, Jay Pipes <jaypi...@gmail.com> wrote:

On 09/04/2014 12:07 AM, Sumit Naiksatam wrote:

Hi,

There's been a lot of lively discussion on GBP a few weeks back and we
wanted to drive forward the discussion on this a bit more. As you might
imagine, we're excited to move this forward so more people can try it
out.  Here are the options:

* Neutron feature branch: This presumably allows the GBP feature to be
developed independently, and will perhaps help in faster iterations.
There does seem to be a significant packaging issue [1] with this
approach that hasn't been completely addressed.

* Neutron-incubator: This allows a path to graduate into Neutron, and
will be managed by the Neutron core team. That said, the proposal is
under discussion and there are still some open questions [2].

* Stackforge: This allows the GBP team to make rapid and iterative
progress, while still leveraging the OpenStack infra. It also provides
option of immediately exposing the existing implementation to early
adopters.

Each of the above options does not preclude moving to the other at a
later time.

Which option do people think is more preferable?

(We could also discuss this in the weekly GBP IRC meeting on Thursday:
https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy )

Thanks!

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/044283.html

[2] http://lists.openstack.org/pipermail/openstack-dev/2014-August/043577.html



Hi all,

IIRC, Kevin was saying to me in IRC that GBP really needs to live
in-tree due to it needing access to various internal plugin points
and to be able to call across different plugin layers/drivers inside
of Neutron.

If this is the case, how would the stackforge GBP project work if it
wasn't a fork of Neutron in its entirety?

Just curious,
-jay







--
Kevin Benton




Re: [openstack-dev] [neutron][policy] Group-based Policy next steps

2014-09-11 Thread Robert Kukura


On 9/10/14, 6:54 PM, Kevin Benton wrote:
Being in the incubator won't help with this if it's a different repo 
as well.

Agreed.

Given the requirement for GBP to intercept API requests, the potential 
couplings between policy drivers, ML2 mechanism drivers, and even 
service plugins (L3 router), and the fact Neutron doesn't have a stable 
[service] plugin API, along with the goal to eventually merge GBP into 
Neutron, I'd rank the options as follows in descending order:


1) Merge the GBP patches to the neutron repo early in Kilo and iterate, 
just like we had planned for Juno;-) .


2) Like 1, but with the code initially in a "preview" subtree to clarify 
its level of stability and support, and to facilitate packaging it as an 
optional component.


3) Like 1, but merge to a feature branch in the neutron repo and iterate 
there.


4) Develop in an official neutron-incubator repo, with neutron core 
reviews of each GBP patch.


5) Develop in StackForge, without neutron core reviews.


Here's how I see these options in terms of the various considerations 
that have come up during this discussion:


* Options 1, 2 and 3 most easily support whatever coupling is needed 
with the rest of Neutron. Options 4 and 5 would sometimes require 
synchronized changes across repos since dependencies aren't in terms of 
stable interfaces.


* Options 1, 2 and 3 provide a clear path to eventually graduate GBP 
into a fully supported Neutron feature, without loss of git history. 
Option 4 would have some hope of eventually merging into the neutron 
repo due to the code having already had core reviews. With option 5, 
reviewing and merging a complete GBP implementation from StackForge into 
the neutron repo would be a huge effort, with significant risk that 
reviewers would want design changes not practical to make at that stage.


* Options 1 and 2 take full advantage of existing review, CI, packaging 
and release processes and mechanisms. All the other options require 
extra work to put these in place.


* Options 1 and 2 can easily make GBP consumable by early adopters 
through normal channels such as devstack and OpenStack distributions. 
The other options all require the operator or the packager to pull GBP 
code from a different source than the base Neutron code.


* Option 1 relies on the historical understanding that new Neutron 
extension APIs are not initially considered stable, and incompatible 
changes can occur in future releases. Options 2, 3 and 4 make this 
explicit. Option 5 really has nothing to do with Neutron.


* Option 5 allows rapid iteration by the GBP team, without waiting for 
core review. This is essential during experimentation and prototyping, 
but at least some participants consider the GBP implementation to be 
well beyond that phase.


* Options 3, 4, and 5 potentially decouple the GBP release schedule from 
the Neutron release schedule. With options 1 or 2, GBP snapshots would 
be included in all normal Neutron releases. With any of the options, the 
GBP team, vendors, or distributions would be able to back-port arbitrary 
snapshots of GBP to a branch off the stable/juno branch (in the neutron 
repo itself or in a clone) to allow early adopters to use GBP with 
Juno-based OpenStack distributions.



Does the above make some sense? What have I missed?

Of course this all assumes there is consensus that we should proceed 
with GBP, that we should continue by iterating the currently proposed 
design and code, and that GBP should eventually become part of Neutron. 
These assumptions may still be the real issues:-( . If we can't agree on 
whether GBP is in an experimentation/rapid-prototyping phase vs. an 
almost-ready-to-beta-test phase, I don't see how can we get consensus on 
the next steps for its development.


-Bob


On Wed, Sep 10, 2014 at 7:22 AM, Robert Kukura <kuk...@noironetworks.com> wrote:



On 9/9/14, 7:51 PM, Jay Pipes wrote:

On 09/09/2014 06:57 PM, Kevin Benton wrote:

Hi Jay,

The main component that won't work without direct integration is
enforcing policy on calls directly to Neutron and calls between the
plugins inside of Neutron. However, that's only one component of GBP.
All of the declarative abstractions, rendering of policy, etc can be
experimented with here in the stackforge project until the incubator is
figured out.


OK, thanks for the explanation Kevin, that helps!

I'll add that there is likely to be a close coupling between ML2
mechanism drivers and corresponding GBP policy drivers for some of
the back-end integrations. These will likely share local state
such as connections to controllers, and may interact with each
other as part of processing core and GBP API requests.
Development, review, and

Re: [openstack-dev] [Neutron] Find the compute host on which a VM runs

2013-11-21 Thread Robert Kukura
On 11/21/2013 04:20 AM, Stefan Apostoaie wrote:
> Hello again,
> 
> I studied the portbindings extension (the quantum.db.portbindings_db and
> quantum.extensions.portbindings modules). However it's unclear for me
> who sets the portbindings.HOST_ID attribute. I ran some tests with OVS:
> called quantum port-create command and
> the OVSQuantumPluginV2.create_port method got called and it had
> 'binding:host_id': >. If I print out
> the port object I have 'binding:host_id': None. 
> 
> What other plugins are doing:
> 1. extend the quantum.db.portbindings_db.PortBindingMixin class
> 2. call the _process_portbindings_create_and_update method in
> create/update port

Take a look at how the ML2 plugin handles port binding and uses
binding:host_id with its set of registered MechanismDrivers. It does not
use the mixin class because the values of binding:vif_type and other
portbinding attributes vary depending on what MechanismDriver binds the
port.

In fact, you may want to consider implementing an ML2 MechanismDriver
rather than an entire new monolithic plugin - it will save you a lot of
work, initially and in the longer term!

> What I cannot find is where the portbindings.HOST_ID attribute is being set.

It's set by nova, either on port creation, or as an update to an existing
port. See allocate_for_instance() and
_populate_neutron_extension_values() in nova/network/neutronv2/api.py.
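
The effect is roughly equivalent to this client-side sketch (illustrative
only - nova uses its own neutron client wrapper rather than this exact
code, and the credentials and port id below are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    port_id = 'PORT-UUID'  # the port allocated for the instance
    # Tell Neutron which compute host the VM was scheduled to:
    neutron.update_port(port_id,
                        {'port': {'binding:host_id': 'compute-host-1'}})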

-Bob

> 
> Regards,
> Stefan
> 
> 
> On Fri, Nov 15, 2013 at 10:57 PM, Mark McClain
> mailto:mark.mccl...@dreamhost.com>> wrote:
> 
> Stefan-
> 
> Your workflow is very similar to many other plugins.  You’ll want to
> look at implementing the port binding extension in your plugin.  The
> port binding extension allows Nova to inform Neutron of the host
> where the VM is running.
> 
> mark
> 
> On Nov 15, 2013, at 9:55 AM, Stefan Apostoaie  > wrote:
> 
> > Hello,
> >
> > I'm creating a Neutron/Quantum plugin to work with a networking
> controller that takes care of the configuration of the virtual
> networks. Basically what we are doing is receive the API calls and
> forward them to our controller to run the required configuration on
> the compute hosts.
> > What I need to know when a create_port call is made to my plugin
> is on which compute host the VM is created (so that our controller
> will run the configuration on that host). Is there a way to find out
> this information from the plugin?
> >
> > Regards,
> > Stefan Apostoaie
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] [Neutron] L3 agent external networks

2013-12-03 Thread Robert Kukura
On 12/03/2013 04:23 AM, Sylvain Afchain wrote:
> Hi,
> 
> I was reviewing this patch (https://review.openstack.org/#/c/52884/) from 
> Oleg and I thought that is a bit tricky to deploy an l3 agent with automation 
> tools like Puppet since you have to specify the uuid of a network that 
> doesn't already exist. It may be better to bind a l3 agent to an network by a 
> CIDR instead of a uuid since when we deploy we know in advance which network 
> address will be on which l3 agent.
> 
> I wanted also remove the L3 agent limit regarding to the number of external 
> networks, I submitted a patch as WIP 
> (https://review.openstack.org/#/c/59359/) for that purpose and I wanted to 
> have the community opinion about that :)

I really like this idea - there is no need to limit an agent to a single
external network unless the external bridge is being used. See my
comments on the patch.

-Bob

> 
> Please let me know what you think.
> 
> Best regards,
> 
> Sylvain
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] ml2 and vxlan configurations, neutron-server fails to start

2013-12-10 Thread Robert Kukura
On 11/28/2013 07:01 AM, Gopi Krishna B wrote:
> 
> Hi
> I am configuring Havana on fedora 19. Observing the below errors in case
> of neutron. 
> Please help me resolve this issue.
> I copied only a few lines from the server.log; in case the full log is
> required, let me know.

You may have resolved this by now, or given up, but a few comments (in
addition to the sqlalchemy version issues others have addressed).

> 
> /etc/neutron/plugins/ml2/ml2_conf.ini
> type_drivers = vxlan,local
> tenant_network_types = vxlan
> mechanism_drivers = neutron.plugins.ml2.drivers.OpenvswitchMechanismDriver

The type_drivers and mechanism_drivers lists are entry point names
rather than class names, so the above line should be:

mechanism_drivers = openvswitch

> network_vlan_ranges = physnet1:1000:2999

You don't need to set the above unless you've enabled the vlan type
driver, but it won't hurt anything. If used, it needs to be in the
[ml2_type_vlan] section.

> vni_ranges = 5000:6000
> vxlan_group = 239.10.10.1

The two lines above need to be in the [ml2_type_vxlan] section.
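
Putting that together, the ML2-specific part of your config should look
roughly like this (values taken from your file; network_vlan_ranges is
omitted since the vlan type driver isn't enabled):

    [ml2]
    type_drivers = vxlan,local
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch

    [ml2_type_vxlan]
    vni_ranges = 5000:6000
    vxlan_group = 239.10.10.1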

-Bob

> 
> 
> ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver
> 
> ERROR stevedore.extension [-] Could not load 'local': (SQLAlchemy 0.8.3
> (/usr/lib64/python2.7/site-packages),
> Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
>  ERROR stevedore.extension [-] (SQLAlchemy 0.8.3
> (/usr/lib64/python2.7/site-packages),
> Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
> 
> ERROR stevedore.extension [-] Could not load 'vxlan': (SQLAlchemy 0.8.3
> (/usr/lib64/python2.7/site-packages),
> Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
> ERROR stevedore.extension [-] (SQLAlchemy 0.8.3
> (/usr/lib64/python2.7/site-packages),
> Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
> TRACE stevedore.extension VersionConflict: (SQLAlchemy 0.8.3
> (/usr/lib64/python2.7/site-packages),
> Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
> 
> ERROR neutron.common.config [-] Unable to load neutron from
> configuration file /etc/neutron/api-paste.ini.
> TRACE neutron.common.config LookupError: No section 'quantum' (prefixed
> by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or
> 'filter-app') found in config /etc/neutron/api-paste.ini
> 
> 
>  ERROR neutron.service [-] In serve_wsgi()
> TRACE neutron.service RuntimeError: Unable to load quantum from
> configuration file /etc/neutron/api-paste.ini.
> 
> Regards
> Gopi Krishna
> 
> 
> 
> 
> 
> 
> 




Re: [openstack-dev] [Neutron] ML2 improvement, more extensible and more modular

2013-12-10 Thread Robert Kukura
On 12/04/2013 05:37 AM, Zang MingJie wrote:
> Hi, all:
> 
> I have already written a patch[1] which makes ml2 segment more
> extensible, where segments can contains more fields other than physical
> network and segmentation id, but there is still a lot of work to do to
> make the ml2 more extensible. Here are some of my thoughts.

Hi Zang,

Thanks for putting this together - hopefully we can cover this at
tomorrow's ML2 meeting.

> 
> First, introduce a new extension to abandon provider extension.

Doesn't the existing multiprovidernet extension already serve this
purpose? If not, I'd like to understand why a different extension is
needed, or how the multiprovidernet extension API needs to change.

Whether/how/when we abandon the old provider extension seems separable
from whether the multiprovidernet extension is sufficient or needs
replacement/modification. Assuming we do merge your patch, my current
thinking is that we should declare the provider extension deprecated for
ML2 in icehouse, but maintain the ability to use the provider extension
when there is only a single segment and the segment's type can be fully
described with the old provider attributes. We could then remove the
provider extension from ML2 in the J release.

> Currently the provider extension only support physical network and
> segmentation id, as a result the net-create and net-show calls can't
> handle any extra fields. Because we already have multi-segment support,
> we may need an extension which extends the network with only one field,
> segments; json can be used to describe segments when accessing the API
> (net-create/show). 

The multiprovidernet extension already does add a "segments" attribute
to network. This attribute is a list of dictionaries, one per segment,
with no constraints on the keys used in the dictionaries. My
understanding of the main objective of your current patch (which I
apologize that I still haven't completed reviewing) is that it allows
type drivers to use arbitrary keys in these dictionaries. Is that correct?

>  But this creates a new problem: type drivers must
> check the access policy of fields inside a segment very carefully; there
> is nowhere to enforce the access permission other than the type driver.

I'm not sure what you mean by "access policy" here. Is this read-only
vs. read/write? Clearly if type drivers can define arbitrary keys, only
they can check access or validate supplied values. Are you suggesting
adding some mechanism to let the API framework know about each type
driver's keys and handle this for them?

> multiprovidernet extension is a good start point, but some modification
> still required.

I'm still not entirely clear on what modifications to the
multiprovidernet API you feel are required.

> 
> Second, add segment calls to mechanism driver. There is an one-to-many
> relation between network and segments, but it is not clear and hide
> inside multi-segment implementation, it should be more clear and more
> extensible, so people can use it wisely. I want to add some new APIs to
> mechanism manager which handles segment relate operations, eg,
> segment_create/segment_release, and separate segment operations from
> network.

Maybe this is the extension API modification you were referring to above
(or maybe not). When I first proposed ML2 and then wrote the blueprint
for multi-segment networks in ML2, I envisioned segments as being
exposed in the API as a new type of REST resource, possibly a
sub-resource of network. But we ended up going with the much simpler
multiprovidernet approach that Aaron Rosen came up with for the NVP
plugin instead.

Regardless of whether we stick with the existing multiprovidernet
extension or switch to an API with segment as a 1st class resource, I
think your idea of adding explicit operations to MechanismManager (and
MechanismDriver I presume) that are called when adding and removing
segments is worth considering. The original thinking was that
MechanismDrivers would see these changes via the network_update
operations. An alternative might be to simply add a
previous_network_segments attribute to NetworkContext so a
MechanismDriver can easily compare it with network_segments in
network_update calls to see what's changed. But your approach may be
more clear and easier to use.

> 
> Last, as our l2 agent (ovs-agent) only handles l2 segment operations,
> and does nothing with networks or subnets, I wonder if we can remove all
> network related code inside agent implementation, and only handle
> segments, change lvm map from {network_id->segment/ports} to
> {segment_id->segment/ports}. The goal is to make the ovs-agent pure l2
> agent.

I like this idea as well. It might be more realistic to consider after
the monolithic openvswitch and linuxbridge plugins are removed from the
tree in the J cycle, and we only need to worry about keeping the agents
compatible with the ML2 plugin. Or maybe it could be done in icehouse
and compatibility maintai

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Robert Kukura
On 01/09/2014 02:34 PM, Nachi Ueno wrote:
> Hi Doug
> 
> 2014/1/9 Doug Hellmann :
>>
>>
>>
>> On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno  wrote:
>>>
>>> Hi folks
>>>
>>> Thank you for your input.
>>>
>>> The key difference from external configuration system (Chef, puppet
>>> etc) is integration with
>>> openstack services.
>>> There are cases a process should know the config value in the other hosts.
>>> If we could have centralized config storage api, we can solve this issue.
>>>
>>> One example of such case is neuron + nova vif parameter configuration
>>> regarding to security group.
>>> The workflow is something like this.
>>>
>>> nova asks the neutron server for vif configuration information.
>>> The neutron server asks for configuration from the neutron l2-agent on the same host
>>> as nova-compute.
>>
>>
>> That extra round trip does sound like a potential performance bottleneck,
>> but sharing the configuration data directly is not the right solution. If
>> the configuration setting names are shared, they become part of the
>> integration API between the two services. Nova should ask neutron how to
>> connect the VIF, and it shouldn't care how neutron decides to answer that
>> question. The configuration setting is an implementation detail of neutron
>> that shouldn't be exposed directly to nova.
> 
> I agree for nova - neutron if.
> However, neutron server and neutron l2 agent configuration depends on
> each other.
> 
>> Running a configuration service also introduces what could be a single point
>> of failure for all of the other distributed services in OpenStack. An
>> out-of-band tool like chef or puppet doesn't result in the same sort of
>> situation, because the tool does not have to be online in order for the
>> cloud to be online.
> 
> We can choose same implementation. ( Copy information in local cache etc)
> 
> Thank you for your input, I could organize my thought.
> My proposal can be split for the two bps.
> 
> [BP1] conf api for the other process
> Provide standard way to know the config value in the other process in
> same host or the other host.
> 
> - API Example:
> conf.host('host1').firewall_driver
> 
> - Conf file based implementation:
> config for each host will be placed in here.
>  /etc/project/conf.d/{hostname}/agent.conf
> 
> [BP2] Multiple backend for storing config files
> 
> Currently, we have only file based configration.
> In this bp, we are extending support for config storage.
> - KVS
> - SQL
> - Chef - Ohai

I'm not opposed to making oslo.config support pluggable back ends, but I
don't think BP2 could be depended upon to satisfy a requirement for a
global view of arbitrary config information, since this wouldn't be
available if a file-based backend were selected.

As far as the neutron server getting info it needs about running L2
agents, this is currently done via the agents_db RPC, where each agent
periodically sends certain info to the server and the server stores it
in the DB for subsequent use. The same mechanism is also used for L3 and
DHCP agents, and probably for *aaS agents. Some agent config information
is included, as well as some stats, etc.. This mechanism does the job,
but could be generalized and improved a bit. But I think this flow of
information is really for specialized purposes - only a small subset of
the config info is passed, and other info is passed that doesn't come
from config.
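
For illustration, the state report an L2 agent sends is roughly a dict of
this shape (the field values here are examples, not code copied from the
OVS agent):

    agent_state = {
        'binary': 'neutron-openvswitch-agent',
        'host': 'compute-host-1',          # normally cfg.CONF.host
        'topic': 'N/A',
        'agent_type': 'Open vSwitch agent',
        'configurations': {
            # the relatively static, config-derived subset
            'bridge_mappings': {'physnet1': 'br-eth1'},
            'tunnel_types': ['vxlan'],
        },
        'start_flag': True,
    }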

My only real concern with using this current mechanism is that some of
the information (stats and liveness) is very dynamic, while other
information (config) is relatively static. It's a bit wasteful to send
all of it every couple of seconds, but at least liveness (heartbeat) info
does need to be sent frequently. BP1 sounds like it could address the
static part, but I'm still not sure config file info is the only
relatively static info that might need to be shared. I think neutron can
stick with its agents_db RPC, DB, and API extension for now, and improve
it as needed.

-Bob

> 
> Best
> Nachi
> 
>> Doug
>>
>>
>>>
>>>
>>> host1
>>>   neutron server
>>>   nova-api
>>>
>>> host2
>>>   neturon l2-agent
>>>   nova-compute
>>>
>>> In this case, a process should know the config value in the other hosts.
>>>
>>> Replying some questions
>>>
 Adding a config server would dramatically change the way that
>>> configuration management tools would interface with OpenStack services.
>>> [Jay]
>>>
>>> Since this bp is just adding "new mode", we can still use existing config
>>> files.
>>>
 why not help to make Chef or Puppet better and cover the more OpenStack
 use-cases rather than add yet another competing system [Doug, Morgan]
>>>
>>> I believe this system is not competing system.
>>> The key point is we should have some standard api to access such services.
>>> As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
>>> as a backend system.
>>>
>>> Best
>>> Nachi
>>>
>>>
>>> 2014/1/9 Morgan Fainberg :
 I agree with Doug’s question, but also would extend the train of though

Re: [openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Robert Kukura
On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
> Hi,
> 
> your proposals make sense. Having the firewall driver configuring so
> many things looks pretty strange.

Agreed. I fully support proposed fix 1, adding enable_security_group
config, at least for ml2. I'm not sure whether making this sort of
change to the openvswitch or linuxbridge plugins at this stage is needed.


> Enabling security group should be a plugin/MD decision, not a driver decision.

I'm not so sure I support proposed fix 2, removing firewall_driver
configuration. I think with proposed fix 1, firewall_driver becomes an
agent-only configuration variable, which seems fine to me, at least for
now. The people working on ovs-firewall-driver need something like this
to choose between their new driver and the iptables driver. Each L2
agent could obviously revisit this later if needed.

> 
> For ML2, in a first implementation, having vif security based on
> vif_type looks good too.

I'm not convinced to support proposed fix 3, basing ml2's vif_security
on the value of vif_type. It seems to me that if vif_type was all that
determines how nova handles security groups, there would be no need for
either the old capabilities or new vif_security port attribute.

I think each ML2 bound MechanismDriver should be able to supply whatever
vif_security (or capabilities) value it needs. It should be free to
determine that however it wants. It could be made configurable on the
server-side as Mathieu suggest below, or could be kept configurable in
the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
the server as I have previously suggested.

As an initial step, until we really have multiple firewall drivers to
choose from, I think we can just hardwire each agent-based
MechanismDriver to return the correct vif_security value for its normal
firewall driver, as we currently do for the capabilities attribute.
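
A rough sketch of what that hardwiring could look like (the dict key and
the driver itself are hypothetical, and the bind_port/set_binding details
are elided rather than shown as real ML2 code):

    from neutron.plugins.ml2 import driver_api as api


    class ExampleAgentMechanismDriver(api.MechanismDriver):
        # Hypothetical agent-based driver using the iptables firewall.

        def initialize(self):
            # Hardwired to match this driver's normal firewall driver;
            # could later be made configurable or fed from agents_db RPC.
            self.vif_security = {'port_filter': True}

        def bind_port(self, context):
            # ... segment selection elided; the bound driver would supply
            # vif_type along with self.vif_security ...
            pass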

Also note that I really like the extend_port_dict() MechanismDriver
methods in Nachi's current patch set. This is a much nicer way for the
bound MechanismDriver to return binding-specific attributes than what
ml2 currently does for vif_type and capabilities. I'm working on a patch
taking that part of Nachi's code, fixing a few things, and extending it
to handle the vif_type attribute as well as the current capabilities
attribute. I'm hoping to post at least a WIP version of this today.

I do support hardwiring the other plugins to return specific
vif_security values, but those values may need to depend on the value of
enable_security_group from proposal 1.

-Bob

> Once OVSfirewallDriver will be available, the firewall drivers that
> the operator wants to use should be in a MD config file/section and
> ovs MD could bind one of the firewall driver during
> port_create/update/get.
> 
> Best,
> Mathieu
> 
> On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno  wrote:
>> Hi folks
>>
>> Security group for OVS agent (ovs plugin or ML2) is being broken.
>> so we need vif_security port binding to fix this
>> (https://review.openstack.org/#/c/21946/)
>>
>> We got discussed about the architecture for ML2 on ML2 weekly meetings, and
>> I wanna continue discussion in here.
>>
>> Here is my proposal for how to fix it.
>>
>> https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p
>>
>> Best
>> Nachi
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Robert Kukura
4) Configure the firewall_driver or security_group_mode for each MD in
the server. This would mean some new RPC is needed for the agent to
fetch this from the server at startup. This could be problematic if
the server isn't running when the L2 agent starts.

>>
> I agree with your thinking here Nachi. Leaving this as a global
> configuration makes the most sense.
> 
>>
>>> Thanks,
>>>
>>> Amir
>>>
>>>
>>> On Jan 16, 2014, at 11:42 AM, Nachi Ueno  wrote:
>>>
>>>> Hi Mathieu, Bob
>>>>
>>>> Thank you for your reply
>>>> OK let's do (A) - (C) for now.
>>>>
>>>> (A) Remove firewall_driver from server side
>>>>Remove Noop <-- I'll write patch for this

This gets replaced with the enable_security_groups server config, right?

>>>>
>>>> (B) update ML2 with extend_port_dict <-- Bob will push new review for this
>>>>
>>>> (C) Fix vif_security patch using (1) and (2). <-- I'll update the
>>>> patch after (A) and (B) merged
>>>># config is hardwired for each mech drivers for now

I completely agree with doing A, B, and C now. My understanding is that
this is equivalent to my option 2 above.

>>>>
>>>> (Optional D) Rething firewall_driver config in the agent

See above for my current view on that. But a decision on D can be
deferred for now, at least until we have a choice of firewall drivers.

-Bob


>>>>
>>>>
>>>>
>>>>
>>>>
>>>> 2014/1/16 Robert Kukura :
>>>>> On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
>>>>>> Hi,
>>>>>>
>>>>>> your proposals make sense. Having the firewall driver configuring so
>>>>>> many things looks pretty strange.
>>>>>
>>>>> Agreed. I fully support proposed fix 1, adding enable_security_group
>>>>> config, at least for ml2. I'm not sure whether making this sort of
>>>>> change to the openvswitch or linuxbridge plugins at this stage is needed.
>>>>>
>>>>>
>>>>>> Enabling security group should be a plugin/MD decision, not a driver 
>>>>>> decision.
>>>>>
>>>>> I'm not so sure I support proposed fix 2, removing firewall_driver
>>>>> configuration. I think with proposed fix 1, firewall_driver becomes an
>>>>> agent-only configuration variable, which seems fine to me, at least for
>>>>> now. The people working on ovs-firewall-driver need something like this
>>>>> to choose between their new driver and the iptables driver. Each L2
>>>>> agent could obviously revisit this later if needed.
>>>>>
>>>>>>
>>>>>> For ML2, in a first implementation, having vif security based on
>>>>>> vif_type looks good too.
>>>>>
>>>>> I'm not convinced to support proposed fix 3, basing ml2's vif_security
>>>>> on the value of vif_type. It seems to me that if vif_type was all that
>>>>> determines how nova handles security groups, there would be no need for
>>>>> either the old capabilities or new vif_security port attribute.
>>>>>
>>>>> I think each ML2 bound MechanismDriver should be able to supply whatever
>>>>> vif_security (or capabilities) value it needs. It should be free to
>>>>> determine that however it wants. It could be made configurable on the
>>>>> server-side as Mathieu suggest below, or could be kept configurable in
>>>>> the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
>>>>> the server as I have previously suggested.
>>>>>
>>>>> As an initial step, until we really have multiple firewall drivers to
>>>>> choose from, I think we can just hardwire each agent-based
>>>>> MechanismDriver to return the correct vif_security value for its normal
>>>>> firewall driver, as we currently do for the capabilities attribute.
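
To make that concrete, a hardwired value could be supplied at binding time
roughly as follows (a minimal sketch only; the method names and the
set_binding() signature are simplified here and are not the exact ML2
driver API of this patch set):

    class ExampleAgentMechanismDriver(object):
        # vif_security hardwired to match this driver's normal firewall
        # driver (iptables-based filtering is supported on bound ports).
        VIF_SECURITY = {'port_filter': True}

        def bind_port(self, context):
            for segment in context.network.network_segments:
                if self._check_segment(segment):
                    context.set_binding(segment['id'],
                                        'ovs',              # binding:vif_type
                                        self.VIF_SECURITY)  # vif_security value
                    return

        def _check_segment(self, segment):
            # segment types this agent can actually wire up
            return segment.get('network_type') in ('local', 'flat', 'vlan')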
>>>>>
>>>>> Also note that I really like the extend_port_dict() MechanismDriver
>>>>> methods in Nachi's current patch set. This is a much nicer way for the
>>>>> bound MechanismDriver to return binding-specific attributes than what
>>>>> ml2 currently does for vif_type and capabilities. I'm working on a patch
>>>>> taking that part of Nachi's code, f

[openstack-dev] [nova][neutron][ml2] Proposal to support VIF security, PCI-passthru/SR-IOV, and other binding-specific data

2014-01-29 Thread Robert Kukura
The neutron patch [1] and nova patch [2], proposed to resolve the
"get_firewall_required should use VIF parameter from neutron" bug [3],
replace the binding:capabilities attribute in the neutron portbindings
extension with a new binding:vif_security attribute that is a dictionary
with several keys defined to control VIF security. When using the ML2
plugin, this binding:vif_security attribute flows from the bound
MechanismDriver to nova's GenericVIFDriver.

Separately, work on PCI-passthru/SR-IOV for ML2 also requires
binding-specific information to flow from the bound MechanismDriver to
nova's GenericVIFDriver. See [4] for links to various documents and BPs
on this.

A while back, in reviewing [1], I suggested a general mechanism to allow
ML2 MechanismDrivers to supply arbitrary port attributes in order to
meet both the above requirements. That approach was incorporated into
[1] and has been cleaned up and generalized a bit in [5].

I'm now becoming convinced that proliferating new port attributes for
various data passed from the neutron plugin (the bound MechanismDriver
in the case of ML2) to nova's GenericVIFDriver is not such a great idea.
One issue is that adding attributes keeps changing the API, but this
isn't really a user-facing API. Another is that all ports should have
the same set of attributes, so the plugin still has to be able to supply
those attributes when a bound MechanismDriver does not supply them. See [5].

Instead, I'm proposing here that the binding:vif_security attribute
proposed in [1] and [2] be renamed binding:vif_details, and used to
transport whatever data needs to flow from the neutron plugin (i.e.
ML2's bound MechanismDriver) to the nova GenericVIFDriver. This same
dictionary attribute would be able to carry the VIF security key/value
pairs defined in [1], those needed for [4], as well as any needed for
future GenericVIFDriver features. The set of key/value pairs in
binding:vif_details that apply would depend on the value of
binding:vif_type.
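
To illustrate, the kinds of key/value pairs that could travel this way
might look as follows (key names and values here are examples only, not a
fixed schema):

    # binding:vif_type = 'ovs' might carry VIF-security style keys:
    {'port_filter': True}

    # while a hypothetical SR-IOV vif_type might instead carry keys such as:
    {'vlan_id': 1000, 'profileid': 'pp-1'}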

If this proposal is agreed to, I can quickly write a neutron BP covering
this and provide a generic implementation for ML2. Then [1] and [2]
could be updated to use binding:vif_details for the VIF security data
and eliminate the existing binding:capabilities attribute.

If we take this proposed approach of using binding:vif_details, the
internal ML2 handling of binding:vif_type and binding:vif_details could
either take the approach used for binding:vif_type and
binding:capabilities in the current code, where the values are stored in
the port binding DB table. Or they could take the approach in [5] where
they are obtained from bound MechanismDriver when needed. Comments on
these options are welcome.

Please provide feedback on this proposal and the various options in this
email thread and/or at today's ML2 sub-team meeting.

Thanks,

-Bob

[1] https://review.openstack.org/#/c/21946/
[2] https://review.openstack.org/#/c/44596/
[3] https://bugs.launchpad.net/nova/+bug/1112912
[4] https://wiki.openstack.org/wiki/Meetings/Passthrough
[5] https://review.openstack.org/#/c/69783/




Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Kukura
On 01/29/2014 09:46 AM, Robert Li (baoli) wrote:

> Another issue that came up during the meeting is about whether or not
> vnic-type should be part of the top level binding or part of
> binding:profile. In other words, should it be defined as
> binding:vnic-type or binding:profile:vnic-type.   

I'd phrase that choice as "top-level attribute" vs. "key/value pair
within the binding:profile attribute". If we go with a new top-level
attribute, it may or may not end up being part of the portbindings
extension.

Although I've been advocating making vnic_type a key within
binding:profile (minimizing effort), it just occurred to me that
policy.json contains:

"create_port:binding:profile": "rule:admin_only",
"get_port:binding:profile": "rule:admin_only",
"update_port:binding:profile": "rule:admin_only",

This means that only administrative users (including nova's integration
with neutron) can read or write the binding:profile attribute by default.

But my (limited) understanding of the PCI-passthru use cases is that
normal users need to specify vnic_type because this is what determines
the NIC type that their VMs see for the port. If that is correct, then I
think this tips the balance towards vnic_type being a new top-level
attribute to which normal users have read/write access. Comments?

If I'm mistaken on the above, please ignore the rest of this email...

If vnic_type is a new top-level attribute accessible to normal users,
then I'm not sure it belongs in the portbindings extension. First,
everything else in that extension is only visible to administrative
users. Second, from the normal user's point of view, vnic_type has to do
with the type of NIC they want within their VM, not with how the port is
bound outside their VM to some underlying network segment and networking
mechanism they aren't even aware of. So we need a new extension for
vnic_type, which has the advantage of not requiring any change to
existing plugins that don't support that extension.

If vnic_type is a new top-level attribute in a new API extension, it
deserves its own neutron BP covering defining the extension and
implementing it in ML2. This is probably an update of Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
Implementations for other plugins could follow via separate BPs as they
choose to implement the extension.
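
A bare-bones sketch of what that extension's attribute map might look like
(the attribute name, default, and value set below are purely illustrative):

    EXTENDED_ATTRIBUTES_2_0 = {
        'ports': {
            'vnic_type': {'allow_post': True, 'allow_put': True,
                          'default': 'virtio',
                          'validate': {'type:values': ['virtio', 'direct',
                                                       'macvtap']},
                          'enforce_policy': True,
                          'is_visible': True},
        }
    }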

If anything else we've been planning to put in binding:profile needs
normal user access, it could be defined in this new extension instead.
For now, I'm assuming other input data for PCI-passthru (such as the
slot info from nova) is only accessible to administrators and will go in
binding:profile. I'll submit a separate BP for generically implementing
the binding:profile attribute in ML2, as we've discussed.

This leaves us with potentially 3 separate generic neutron/Ml2 BPs
providing the infrastructure for PCI-passthru:

1) Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
2) My BP to implement binding:profile in ML2
3) Definition/implementation of binding:vif_details based on Nachi's
binding:vif_security patch, for which I could submit a BP.

-Bob




Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Kukura
On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
> Hi Bob,
> 
> that's a good find. profileid as part of IEEE 802.1br needs to be in
> binding:profile, and can be specified by a normal user, and later possibly
> the pci_flavor. Would it be wrong to say something as in below in the
> policy.json?
>  "create_port:binding:vnic_type": "rule:admin_or_network_owner"
>  "create_port:binding:profile:profileid": "rule:admin_or_network_owner"

Maybe, but a normal user that owns a network has no visibility into the
underlying details (such as the providernet extension attributes).

It seems to me that profileid is something that only makes sense to an
administrator of the underlying cloud environment. Where would a normal
cloud user get a value to use for this?

Also, would a normal cloud user really know what pci_flavor to use?
Isn't all this kind of detail hidden from a normal user within the nova
VM flavor (or host aggregate or whatever) pre-configured by the admin?

-Bob

> 
> If it's not appropriate, then I agree with you we may need another
> extension. 
> 
> 
> --Robert
> 
> On 1/29/14 4:57 PM, "Robert Kukura"  wrote:
> 
>> On 01/29/2014 09:46 AM, Robert Li (baoli) wrote:
>>
>>> Another issue that came up during the meeting is about whether or not
>>> vnic-type should be part of the top level binding or part of
>>> binding:profile. In other words, should it be defined as
>>> binding:vnic-type or binding:profile:vnic-type.
>>
>> I'd phrase that choice as "top-level attribute" vs. "key/value pair
>> within the binding:profile attribute". If we go with a new top-level
>> attribute, it may or may not end up being part of the portbindings
>> extension.
>>
>> Although I've been advocating making vnic_type a key within
>> binding:profile (minimizing effort), it just occurred to me that
>> policy.json contains:
>>
>>"create_port:binding:profile": "rule:admin_only",
>>"get_port:binding:profile": "rule:admin_only",
>>"update_port:binding:profile": "rule:admin_only",
>>
>> This means that only administrative users (including nova's integration
>> with neutron) can read or write the binding:profile attribute by default.
>>
>> But my (limited) understanding of the PCI-passthru use cases is that
>> normal users need to specify vnic_type because this is what determines
>> the NIC type that their VMs see for the port. If that is correct, then I
>> think this tips the balance towards vnic_type being a new top-level
>> attribute to which normal users have read/write access. Comments?
>>
>> If I'm mistaken on the above, please ignore the rest of this email...
>>
>> If vnic_type is a new top-level attribute accessible to normal users,
>> then I'm not sure it belongs in the portbindings extension. First,
>> everything else in that extension is only visible to administrative
>> users. Second, from the normal user's point of view, vnic_type has to do
>> with the type of NIC they want within their VM, not with how the port is
>> bound outside their VM to some underlying network segment and networking
>> mechanism they aren't even aware of. So we need a new extension for
>> vnic_type, which has the advantage of not requiring any change to
>> existing plugins that don't support that extension.
>>
>> If vnic_type is a new top-level attribute in a new API extension, it
>> deserves its own neutron BP covering defining the extension and
>> implementing it in ML2. This is probably an update of Irena's
>> https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
>> Implementations for other plugins could follow via separate BPs as they
>> choose to implement the extension.
>>
>> If anything else we've been planning to put in binding:profile needs
>> normal user access, it could be defined in this new extension instead.
>> For now, I'm assuming other input data for PCI-passthru (such as the
>> slot info from nova) is only accessible to administrators and will go in
>> binding:profile. I'll submit a separate BP for generically implementing
>> the binding:profile attribute in ML2, as we've discussed.
>>
>> This leaves us with potentially 3 separate generic neutron/Ml2 BPs
>> providing the infrastructure for PCI-passthru:
>>
>> 1) Irena's
>> https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
>> 2) My BP to implement binding:profile in ML2
>> 3) Definition/implementation of binding:vif_details based on Nachi's
>> binding:vif_security patch, for which I could submit a BP.
>>
>> -Bob
>>
> 




Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-30 Thread Robert Kukura
On 01/30/2014 01:42 AM, Irena Berezovsky wrote:
> Please see inline
> 
>  
> 
> From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
> Sent: Thursday, January 30, 2014 1:17 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
> Jan. 29th
> 
>  
> 
> On 29 January 2014 23:50, Robert Kukura <rkuk...@redhat.com> wrote:
> 
> On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
> > Hi Bob,
> >
> > that's a good find. profileid as part of IEEE 802.1br needs to be in
> > binding:profile, and can be specified by a normal user, and later
> possibly
> > the pci_flavor. Would it be wrong to say something as in below in the
> > policy.json?
> >  "create_port:binding:vnic_type": "rule:admin_or_network_owner"
> >  "create_port:binding:profile:profileid":
> "rule:admin_or_network_owner"
> 
> Maybe, but a normal user that owns a network has no visibility into the
> underlying details (such as the providernet extension attributes.
> 
>  
> 
> I'm with Bob on this, I think - I would expect that vnic_type is passed
> in by the user (user readable, and writeable, at least if the port is
> not attached) and then may need to be reflected back, if present, in the
> 'binding' attribute via the port binding extension (unless Nova can just
> go look for it - I'm not clear on what's possible here).
> 
> [IrenaB] I would prefer not to add a new extension for vnic_type. I
> think it fits well into the port binding extension, and it may be reasonable
> to follow the policy rules as Robert suggested. The way the user specifies
> the vnic_type via the nova API is currently left out for the short term. Based
> on previous PCI meeting discussions, it was raised by John that a regular
> user may be required to set a vNIC flavor, but is definitely not expected
> to manage ‘driver’ level details of the way to connect the vNIC.
> 
> For me it looks like the neutron port can handle vnic_type via port
> binding, and the question is whether it is a standalone attribute on port
> binding or a key,val pair on port binding:profile.

I do not think we should try to associate different access policies with
different keys within the binding:profile attribute (or any other
dictionary attribute). We could consider changing the policy for
binding:profile itself, but I'm not in favor of that because I strongly
feel normal cloud users should not be exposed to any of these internal
details of the deployment. If vnic_type does need to be accessed by
normal users, I believe it should be a top-level attribute or a
key/value pair within a user-accessible top-level attribute.

-Bob

> 
> 
>  
> 
> Also, would a normal cloud user really know what pci_flavor to use?
> Isn't all this kind of detail hidden from a normal user within the nova
> VM flavor (or host aggregate or whatever) pre-configured by the admin?
> 
>  
> 
> Flavors are user-visible, analogous to Nova's machine flavors, they're
> just not user editable.  I'm not sure where port profiles come from.
> -- 
> 
> Ian.
> 
>  
> 
> 
> 
> 




Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-31 Thread Robert Kukura
On 01/30/2014 03:44 PM, Robert Li (baoli) wrote:
> Hi,
> 
> We made a lot of progress today. We agreed that:
> -- vnic_type will be a top level attribute as binding:vnic_type
> -- BPs:
>  * Irena's
> https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for
> binding:vnic_type
>  * Bob to submit a BP for binding:profile in ML2. SRIOV input info
> will be encapsulated in binding:profile

This is https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile.

>  * Bob to submit a BP for binding:vif_details in ML2. SRIOV output
> info will be encapsulated in binding:vif_details, which may include
> other information like security parameters. For SRIOV, vlan_id and
> profileid are candidates.

This is https://blueprints.launchpad.net/neutron/+spec/vif-details.

> -- new arguments for port-create will be implicit arguments. Future
> release may make them explicit. New argument: --binding:vnic_type
> {virtio, direct, macvtap}. 
> I think that currently we can make do without the profileid as an input
> parameter from the user. The mechanism driver will return a profileid in
> the vif output.

By "vif output" here, do you mean binding:vif_details? If so, do we know
how the MD gets the value to return?

> 
> Please correct any misstatement in above.

Sounds right to me.
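
For what it's worth, exercising the agreed implicit argument from the CLI
should look roughly like this (the value 'direct' is just an example):

    neutron port-create net1 --binding:vnic_type direct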

> 
> Issues: 
>   -- do we need a common utils/driver for SRIOV generic parts to be used
> by individual Mechanism drivers that support SRIOV? More details on what
> would be included in this sriov utils/driver? I'm thinking that a
> candidate would be the helper functions to interpret the pci_slot, which
> is proposed as a string. Anything else in your mind? 

I'd suggest looking at the
neutron.plugins.ml2.drivers.mech_agent.AgentMechanismDriverBase class
that is inherited by the various MDs that use L2 agents. This handles
most of what the MDs need to do, and the derived classes only deal with
details specific to that L2 agent. Maybe a similar
SriovMechanismDriverBase class would make sense.
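
The shape of such a shared base class might be something like the following
(purely a sketch, not a worked-out design; the helper below just illustrates
the pci_slot parsing mentioned above):

    class SriovMechanismDriverBase(object):
        # Hypothetical common SR-IOV plumbing for vendor mechanism drivers;
        # subclasses supply the device-specific pieces.

        def parse_pci_slot(self, pci_slot_str):
            # e.g. '0000:03:00.2' -> ('0000', '03', '00', '2')
            domain, bus, rest = pci_slot_str.split(':')
            device, function = rest.split('.')
            return domain, bus, device, function

        def check_segment(self, segment):
            # can this SR-IOV device wire up a segment of this type?
            raise NotImplementedError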

> 
>   -- what should mechanism drivers put in binding:vif_details and how
> nova would use this information? as far as I see it from the code, a VIF
> object is created and populated based on information provided by neutron
> (from get network and get port)

I think nova should include the entire binding:vif_details attribute in
its VIF object so that the GenericVIFDriver can interpret whatever
key/value pairs are needed (based on the binding:vif_type). We are going
to need to work closely with the nova team to make this so.
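
Schematically, the consuming side might then look something like this (not
nova's actual GenericVIFDriver code; the 'sriov' vif_type and key names are
illustrative):

    def interpret_vif_details(vif_type, vif_details):
        # Each vif_type reads just the vif_details keys that apply to it.
        if vif_type == 'ovs':
            return {'apply_port_filter': vif_details.get('port_filter', False)}
        if vif_type == 'sriov':
            return {'vlan_id': vif_details.get('vlan_id'),
                    'profileid': vif_details.get('profileid')}
        return {}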

> 
> Questions:
>   -- nova needs to work with both ML2 and non-ML2 plugins. For regular
> plugins, binding:vnic_type will not be set, I guess. Then would it be
> treated as a virtio type? And if a non-ML2 plugin wants to support
> SRIOV, would it need to  implement vnic-type, binding:profile,
> binding:vif-details for SRIOV itself?

Makes sense to me.

> 
>  -- is a neutron agent making decision based on the binding:vif_type?
>  In that case, it makes sense for binding:vnic_type not to be exposed to
> agents.

I'm not sure I understand what an L2 agent would do with this. As I've
mentioned, I think ML2 will eventually allow the bound MD to add
whatever info it needs to the response returned for the
get_device_details RPC. If the vnic_type is needed in an SRIOV-specific
L2 agent, that should allow the associated driver to supply it.

> 
> Thanks,
> Robert

-Bob





Re: [openstack-dev] [nova][neutron][ml2] Proposal to support VIF security, PCI-passthru/SR-IOV, and other binding-specific data

2014-01-31 Thread Robert Kukura
On 01/29/2014 10:26 AM, Robert Kukura wrote:
> The neutron patch [1] and nova patch [2], proposed to resolve the
> "get_firewall_required should use VIF parameter from neutron" bug [3],
> replace the binding:capabilities attribute in the neutron portbindings
> extension with a new binding:vif_security attribute that is a dictionary
> with several keys defined to control VIF security. When using the ML2
> plugin, this binding:vif_security attribute flows from the bound
> MechanismDriver to nova's GenericVIFDriver.
> 
> Separately, work on PCI-passthru/SR-IOV for ML2 also requires
> binding-specific information to flow from the bound MechanismDriver to
> nova's GenericVIFDriver. See [4] for links to various documents and BPs
> on this.
> 
> A while back, in reviewing [1], I suggested a general mechanism to allow
> ML2 MechanismDrivers to supply arbitrary port attributes in order to
> meet both the above requirements. That approach was incorporated into
> [1] and has been cleaned up and generalized a bit in [5].
> 
> I'm now becoming convinced that proliferating new port attributes for
> various data passed from the neutron plugin (the bound MechanismDriver
> in the case of ML2) to nova's GenericVIFDriver is not such a great idea.
> One issue is that adding attributes keeps changing the API, but this
> isn't really a user-facing API. Another is that all ports should have
> the same set of attributes, so the plugin still has to be able to supply
> those attributes when a bound MechanismDriver does not supply them. See [5].
> 
> Instead, I'm proposing here that the binding:vif_security attribute
> proposed in [1] and [2] be renamed binding:vif_details, and used to
> transport whatever data needs to flow from the neutron plugin (i.e.
> ML2's bound MechanismDriver) to the nova GenericVIFDriver. This same
> dictionary attribute would be able to carry the VIF security key/value
> pairs defined in [1], those needed for [4], as well as any needed for
> future GenericVIFDriver features. The set of key/value pairs in
> binding:vif_details that apply would depend on the value of
> binding:vif_type.

I've filed a blueprint for this:

 https://blueprints.launchpad.net/neutron/+spec/vif-details

Also, for a similar flow of binding-related information into the
plugin/MechanismDriver, I've filed a blueprint to implement the existing
binding:profile attribute in ML2:

 https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile

Both of these are admin-only dictionary attributes on port. One is
read-only for output data, the other read-write for input data. Together
they enable optional features like SR-IOV PCI passthrough to be
implemented in ML2 MechanismDrivers without requiring feature-specific
changes to the plugin itself.
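
A made-up port showing both dictionaries side by side, with binding:profile
written by nova (input) and binding:vif_details written by the bound
MechanismDriver (output); all values below are illustrative:

    {'id': 'PORT_UUID',
     'binding:host_id': 'compute-1',
     'binding:vnic_type': 'direct',
     'binding:profile': {'pci_slot': '0000:03:00.2'},
     'binding:vif_type': 'sriov',
     'binding:vif_details': {'vlan_id': 1000, 'profileid': 'pp-1'}}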

-Bob

> 
> If this proposal is agreed to, I can quickly write a neutron BP covering
> this and provide a generic implementation for ML2. Then [1] and [2]
> could be updated to use binding:vif_details for the VIF security data
> and eliminate the existing binding:capabilities attribute.
> 
> If we take this proposed approach of using binding:vif_details, the
> internal ML2 handling of binding:vif_type and binding:vif_details could
> either take the approach used for binding:vif_type and
> binding:capabilities in the current code, where the values are stored in
> the port binding DB table. Or they could take the approach in [5] where
> they are obtained from bound MechanismDriver when needed. Comments on
> these options are welcome.
> 
> Please provide feedback on this proposal and the various options in this
> email thread and/or at today's ML2 sub-team meeting.
> 
> Thanks,
> 
> -Bob
> 
> [1] https://review.openstack.org/#/c/21946/
> [2] https://review.openstack.org/#/c/44596/
> [3] https://bugs.launchpad.net/nova/+bug/1112912
> [4] https://wiki.openstack.org/wiki/Meetings/Passthrough
> [5] https://review.openstack.org/#/c/69783/
> 
> 
> 




Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-31 Thread Robert Kukura
On 01/31/2014 11:45 AM, Sandhya Dasu (sadasu) wrote:
> Hi Irena,
>   I was initially looking
> at 
> https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
> to
> take care of the extra information required to set up the SR-IOV port.
> When the scope of the BP was being decided, we had very little info
> about our own design so I didn't give any feedback about SR-IOV ports.
> But, I feel that this is the direction we should be going. Maybe we
> should target this in Juno.

That BP covers including additional information from the bound network
segment's TypeDriver in the response to the get_device_details RPC. I
believe the bound MechanismDriver should also have the opportunity to
include additional information in that response. Possibly the bound
MechanismDriver is what would decide what information from the bound
segment's TypeDriver is needed by the L2 agent it supports. Anyway, I'm
still hopeful we can get this sorted out and implemented in icehouse,
but I agree it's best not to depend on it until juno.

> 
> Introducing, SRIOVPortProfileMixin would be creating yet another way to
> take care of extra port config. Let me know what you think.

This SRIOVPortProfileMixin has been mentioned a few times now. I'm not
clear on what this class is intended to be mixed into. Is this something
that would be mixed into any plugin that supports SRIOV?

If so, I'd prefer not to use such a mixin class in ML2, where we've so
far been avoiding the need to add any specific support for SRIOV to the
plugin itself. Instead we've been trying to define generic features in
ML2 that allow SRIOV to be packaged as an optional feature enabled by
configuring a MechanismDriver that supports it. This approach is a prime
example of the "modular" goal of "Modular Layer 2".

-Bob

> 
> Thanks,
> Sandhya
> 
> From: Irena Berezovsky <ire...@mellanox.com>
> Date: Thursday, January 30, 2014 4:13 PM
> To: "Robert Li (baoli)" <ba...@cisco.com>, Robert Kukura <rkuk...@redhat.com>,
> Sandhya Dasu <sad...@cisco.com>, "OpenStack Development Mailing List (not for
> usage questions)" <openstack-dev@lists.openstack.org>, "Brian Bowen (brbowen)"
> <brbo...@cisco.com>
> Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
> Jan. 30th
> 
> Robert,
> 
> Thank you very much for the summary.
> 
> Please, see inline
> 
>  
> 
> From: Robert Li (baoli) [mailto:ba...@cisco.com]
> Sent: Thursday, January 30, 2014 10:45 PM
> To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack
> Development Mailing List (not for usage questions); Brian Bowen (brbowen)
> Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
> Jan. 30th
> 
>  
> 
> Hi,
> 
>  
> 
> We made a lot of progress today. We agreed that:
> 
> -- vnic_type will be a top level attribute as binding:vnic_type
> 
> -- BPs:
> 
>  * Irena's
> https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for
> binding:vnic_type
> 
>  * Bob to submit a BP for binding:profile in ML2. SRIOV input info
> will be encapsulated in binding:profile
> 
>  * Bob to submit a BP for binding:vif_details in ML2. SRIOV output
> info will be encapsulated in binding:vif_details, which may include
> other information like security parameters. For SRIOV, vlan_id and
> profileid are candidates.
> 
> -- new arguments for port-create will be implicit arguments. Future
> release may make them explicit. New argument: --binding:vnic_type
> {virtio, direct, macvtap}. 
> 
> I think that currently we can make do without the profileid as an input
> parameter from the user. The mechanism driver will return a profileid in
> the vif output.
> 
>  
> 
> Please correct any misstatement in above.
> 
>  
> 
> Issues: 
> 
>   -- do we need a common utils/driver for SRIOV generic parts to be used
> by individual Mechanism drivers that support SRIOV? More details on what
> would be included in this sriov utils/driver? I'm thinking that a
> candidate would be the helper functions to interpret the pci_slot, which
> is proposed as a string. Anything else in your mind? 
> 
> [IrenaB] I thought of some SRIOVPortProfileMixin to handle and persist
> SRIOV port related attributes
> 
>  
> 
>   -- what should mechanism drivers put in binding:vif_details and how
> nova would use this information? as far as I see it from the code, a VIF
> object is created and populated based on information provided by neutron
> (from get network and get port)
> 
>  
> 
> Questions:
> 

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-02-04 Thread Robert Kukura
On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
> Hi,
>  I have a couple of questions for ML2 experts regarding support of
> SR-IOV ports.

I'll try, but I think these questions might be more about how the
various SR-IOV implementations will work than about ML2 itself...

> 1. The SR-IOV ports would not be managed by ovs or linuxbridge L2
> agents. So, how does a MD for SR-IOV ports bind/unbind its ports to the
> host? Will it just be a db update?

I think whether or not to use an L2 agent depends on the specific SR-IOV
implementation. Some (Mellanox?) might use an L2 agent, while others
(Cisco?) might put information in binding:vif_details that lets the nova
VIF driver take care of setting up the port without an L2 agent.
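
A mechanism driver of the second kind might bind roughly as follows (sketch
only; the set_binding() signature is simplified and the vif_type and
vif_details keys are illustrative):

    class ExampleSriovMechanismDriver(object):
        def bind_port(self, context):
            for segment in context.network.network_segments:
                if segment.get('network_type') == 'vlan':
                    context.set_binding(
                        segment['id'],
                        'sriov',  # illustrative binding:vif_type
                        {'vlan_id': segment.get('segmentation_id'),
                         'profileid': 'pp-1'})  # read by nova's VIF driver
                    return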

> 
> 2. Also, how do we handle the functionality in mech_agent.py, within the
> SR-IOV context? 

My guess is that those SR-IOV MechanismDrivers that use an L2 agent
would inherit the AgentMechanismDriverBase class if it provides useful
functionality, but any MechanismDriver implementation is free to not use
this base class if its not applicable. I'm not sure if an
SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being
planned, and how that would relate to AgentMechanismDriverBase.

-Bob

> 
> Thanks,
> Sandhya
> 
> From: Sandhya Dasu <sad...@cisco.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Monday, February 3, 2014 3:14 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>, Irena Berezovsky <ire...@mellanox.com>,
> "Robert Li (baoli)" <ba...@cisco.com>, Robert Kukura <rkuk...@redhat.com>,
> "Brian Bowen (brbowen)" <brbo...@cisco.com>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
> extra hr of discussion today
> 
> Hi,
> Since, openstack-meeting-alt seems to be in use, baoli and myself
> are moving to openstack-meeting. Hopefully, Bob Kukura & Irena can join
> soon.
> 
> Thanks,
> Sandhya
> 
> From: Sandhya Dasu <sad...@cisco.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Monday, February 3, 2014 1:26 PM
> To: Irena Berezovsky <ire...@mellanox.com>, "Robert Li (baoli)"
> <ba...@cisco.com>, Robert Kukura <rkuk...@redhat.com>, "OpenStack
> Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>, "Brian Bowen (brbowen)"
> <brbo...@cisco.com>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
> extra hr of discussion today
> 
> Hi all,
>     Both openstack-meeting and openstack-meeting-alt are available
> today. Lets meet at UTC 2000 @ openstack-meeting-alt.
> 
> Thanks,
> Sandhya
> 
> From: Irena Berezovsky <ire...@mellanox.com>
> Date: Monday, February 3, 2014 12:52 AM
> To: Sandhya Dasu <sad...@cisco.com>, "Robert Li (baoli)" <ba...@cisco.com>,
> Robert Kukura <rkuk...@redhat.com>, "OpenStack Development Mailing List
> (not for usage questions)" <openstack-dev@lists.openstack.org>,
> "Brian Bowen (brbowen)" <brbo...@cisco.com>
> Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
> Jan. 30th
> 
> Hi Sandhya,
> 
> Can you please elaborate how do you suggest to extend the below bp for
> SRIOV Ports managed by different Mechanism Driver?
> 
> I am not biased to any specific direction here, I just think we need a
> common layer for managing SRIOV ports in neutron, since there is a common
> path between nova and neutron.
> 
>  
> 
> BR,
> 
> Irena
> 
>  
> 
>  
> 
> From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
> Sent: Friday, January 31, 2014 6:46 PM
> To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack
> Development Mailing List (not for usage questions); Brian Bowen (brbowen)
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
> Jan. 30th
> 
>  
> 
> Hi Irena,
> 
>   I was initially looking
> at 
> https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
> to
> take care of the extra information required to set up the SR-IOV port.
> When the scope of the BP was being decided, we had very little info
> about our own design so I didn't give any feedback about SR-

[openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-04 Thread Robert Kukura
A couple of interrelated issues with the ML2 plugin's port binding have
been discussed over the past several months in the weekly ML2 meetings.
These effect drivers being implemented for icehouse, and therefore need
to be addressed in icehouse:

* MechanismDrivers need detailed information about all binding changes,
including unbinding on port deletion
(https://bugs.launchpad.net/neutron/+bug/1276395)
* MechanismDrivers' bind_port() methods are currently called inside
transactions, but in some cases need to make remote calls to controllers
or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
* Semantics of concurrent port binding need to be defined if binding is
moved outside the triggering transaction.

I've taken the action of writing up a unified proposal for resolving
these issues, which follows...

1) An original_bound_segment property will be added to PortContext. When
the MechanismDriver update_port_precommit() and update_port_postcommit()
methods are called and a binding previously existed (whether it's being
torn down or not), this property will provide access to the network
segment used by the old binding. In these same cases, the portbinding
extension attributes (such as binding:vif_type) for the old binding will
be available via the PortContext.original property. It may be helpful to
also add bound_driver and original_bound_driver properties to
PortContext that behave similarly to bound_segment and
original_bound_segment.
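
As a quick illustration of how a driver might consume these (sketch only;
the property names follow this proposal rather than any merged code):

    class ExampleMechanismDriver(object):
        def update_port_postcommit(self, context):
            old = context.original_bound_segment
            new = context.bound_segment
            if old and not new:
                # binding torn down: release whatever was programmed for old
                self._teardown(context.original, old)
            elif old and new and old['id'] != new['id']:
                # rebinding onto a different segment
                self._teardown(context.original, old)
                self._setup(context.current, new)

        def _teardown(self, port, segment):
            pass  # device/controller specific

        def _setup(self, port, segment):
            pass  # device/controller specific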

2) The MechanismDriver.bind_port() method will no longer be called from
within a transaction. This will allow drivers to make remote calls on
controllers or devices from within this method without holding a DB
transaction open during those calls. Drivers can manage their own
transactions within bind_port() if needed, but need to be aware that
these are independent from the transaction that triggered binding, and
concurrent changes to the port could be occurring.

3) Binding will only occur after the transaction that triggers it has
been completely processed and committed. That initial transaction will
unbind the port if necessary. Four cases for the initial transaction are
possible:

3a) In a port create operation, whether the binding:host_id is supplied
or not, all drivers' create_port_precommit() methods will be called, the
initial transaction will be committed, and all drivers'
create_port_postcommit() methods will be called. The drivers will see
this as creation of a new unbound port, with PortContext properties as
shown. If a value for binding:host_id was supplied, binding will occur
afterwards as described in 4 below.

PortContext.original: None
PortContext.original_bound_segment: None
PortContext.original_bound_driver: None
PortContext.current['binding:host_id']: supplied value or None
PortContext.current['binding:vif_type']: 'unbound'
PortContext.bound_segment: None
PortContext.bound_driver: None

3b) Similarly, in a port update operation on a previously unbound port,
all drivers' update_port_precommit() and update_port_postcommit()
methods will be called, with PortContext properties as shown. If a value
for binding:host_id was supplied, binding will occur afterwards as
described in 4 below.

PortContext.original['binding:host_id']: previous value or None
PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
PortContext.original_bound_segment: None
PortContext.original_bound_driver: None
PortContext.current['binding:host_id']: current value or None
PortContext.current['binding:vif_type']: 'unbound'
PortContext.bound_segment: None
PortContext.bound_driver: None

3c) In a port update operation on a previously bound port that does not
trigger unbinding or rebinding, all drivers' update_port_precommit() and
update_port_postcommit() methods will be called with PortContext
properties reflecting unchanged binding states as shown.

PortContext.original['binding:host_id']: previous value
PortContext.original['binding:vif_type']: previous value
PortContext.original_bound_segment: previous value
PortContext.original_bound_driver: previous value
PortContext.current['binding:host_id']: previous value
PortContext.current['binding:vif_type']: previous value
PortContext.bound_segment: previous value
PortContext.bound_driver: previous value

3d) In a port update operation on a previously bound port that does
trigger unbinding or rebinding, all drivers' update_port_precommit() and
update_port_postcommit() methods will be called with PortContext
properties reflecting the previously bound and currently unbound binding
states as shown. If a value for binding:host_id was supplied, binding
will occur afterwards as described in 4 below.

PortContext.original['binding:host_id']: previous value
PortContext.original['binding:vif_type']: previous value
PortContext.original_bound_segment: previous value
PortContext.original_bound_driver: previous value
PortContext.current['binding:host_id']: new or current value
PortContext.current['binding:vif_type']: 'unbound'
PortCo

Re: [openstack-dev] Agenda for todays ML2 Weekly meeting

2014-02-05 Thread Robert Kukura
On 02/05/2014 06:06 AM, trinath.soman...@freescale.com wrote:
> Hi-
> 
>  
> 
> Kindly share me the agenda for today weekly meeting on Neutron/ML2.

I just updated
https://wiki.openstack.org/wiki/Meetings/ML2#Meeting_February_5.2C_2014.
Mestery has a conflict for today's meeting.

-Bob

> 
>  
> 
>  
> 
> Best Regards,
> 
> --
> 
> Trinath Somanchi - B39208
> 
> trinath.soman...@freescale.com | extn: 4048
> 
>  
> 
> 
> 
> 




Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-05 Thread Robert Kukura
On 02/05/2014 09:10 AM, Henry Gessau wrote:
> Bob, this is fantastic, I really appreciate all the detail. A couple of
> questions ...
> 
> On Wed, Feb 05, at 2:16 am, Robert Kukura  wrote:
> 
>> A couple of interrelated issues with the ML2 plugin's port binding have
>> been discussed over the past several months in the weekly ML2 meetings.
>> These effect drivers being implemented for icehouse, and therefore need
>> to be addressed in icehouse:
>>
>> * MechanismDrivers need detailed information about all binding changes,
>> including unbinding on port deletion
>> (https://bugs.launchpad.net/neutron/+bug/1276395)
>> * MechanismDrivers' bind_port() methods are currently called inside
>> transactions, but in some cases need to make remote calls to controllers
>> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
>> * Semantics of concurrent port binding need to be defined if binding is
>> moved outside the triggering transaction.
>>
>> I've taken the action of writing up a unified proposal for resolving
>> these issues, which follows...
>>
>> 1) An original_bound_segment property will be added to PortContext. When
>> the MechanismDriver update_port_precommit() and update_port_postcommit()
>> methods are called and a binding previously existed (whether its being
>> torn down or not), this property will provide access to the network
>> segment used by the old binding. In these same cases, the portbinding
>> extension attributes (such as binding:vif_type) for the old binding will
>> be available via the PortContext.original property. It may be helpful to
>> also add bound_driver and original_bound_driver properties to
>> PortContext that behave similarly to bound_segment and
>> original_bound_segment.
>>
>> 2) The MechanismDriver.bind_port() method will no longer be called from
>> within a transaction. This will allow drivers to make remote calls on
>> controllers or devices from within this method without holding a DB
>> transaction open during those calls. Drivers can manage their own
>> transactions within bind_port() if needed, but need to be aware that
>> these are independent from the transaction that triggered binding, and
>> concurrent changes to the port could be occurring.
>>
>> 3) Binding will only occur after the transaction that triggers it has
>> been completely processed and committed. That initial transaction will
>> unbind the port if necessary. Four cases for the initial transaction are
>> possible:
>>
>> 3a) In a port create operation, whether the binding:host_id is supplied
>> or not, all drivers' port_create_precommit() methods will be called, the
>> initial transaction will be committed, and all drivers'
>> port_create_postcommit() methods will be called. The drivers will see
>> this as creation of a new unbound port, with PortContext properties as
>> shown. If a value for binding:host_id was supplied, binding will occur
>> afterwards as described in 4 below.
>>
>> PortContext.original: None
>> PortContext.original_bound_segment: None
>> PortContext.original_bound_driver: None
>> PortContext.current['binding:host_id']: supplied value or None
>> PortContext.current['binding:vif_type']: 'unbound'
>> PortContext.bound_segment: None
>> PortContext.bound_driver: None
>>
>> 3b) Similarly, in a port update operation on a previously unbound port,
>> all drivers' port_update_precommit() and port_update_postcommit()
>> methods will be called, with PortContext properies as shown. If a value
>> for binding:host_id was supplied, binding will occur afterwards as
>> described in 4 below.
>>
>> PortContext.original['binding:host_id']: previous value or None
>> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
>> PortContext.original_bound_segment: None
>> PortContext.original_bound_driver: None
>> PortContext.current['binding:host_id']: current value or None
>> PortContext.current['binding:vif_type']: 'unbound'
>> PortContext.bound_segment: None
>> PortContext.bound_driver: None
>>
>> 3c) In a port update operation on a previously bound port that does not
>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>> update_port_postcommit() methods will be called with PortContext
>> properties reflecting unchanged binding states as shown.
>>
>> PortContext.original['binding:host_id']: previous value
>> PortContext.original['bindin

Re: [openstack-dev] [Neutron] Support for multiple provider networks with same VLAN segmentation id

2014-02-09 Thread Robert Kukura
On 02/09/2014 12:56 PM, Kyle Mestery wrote:
> On Feb 6, 2014, at 5:24 PM, Vinay Bannai  wrote:
> 
>> Hello Folks, 
>>
>> We are running into a situation where we are not able to create multiple 
>> provider networks with the same VLAN id. We would like to propose a solution 
>> to remove this restriction through a configuration option. This approach 
>> would not conflict with the present behavior where it is not possible to 
>> create multiple provider networks with the same VLAN id. 
>>
>> The changes should be minimal and would like to propose it for the next 
>> summit. The use case for this need is documented in the blueprint 
>> specification. 
>> Any feedback or comments are welcome. 
>>
>> https://blueprints.launchpad.net/neutron/+spec/duplicate-providernet-vlans
>>
> Hi Vinay:
> 
> This problem seems straightforward enough, though currently you are right
> in that we don’t allow multiple Neutron networks to have the same segmentation
> ID. I’ve added myself as approver for this BP and look forward to further
> discussions of this before and during the upcoming Summit!

Multiple networks with network_type of 'vlan' are already allowed to
have the same segmentation ID with the ml2, openvswitch, or linuxbridge
plugins - the networks just need to have different physical_network
names. If they have the same network_type, physical_network, and
segmentation_id, they are the same network. What else would distinguish
them from each other?

Could your use case be addressed by simply using different
physical_network names for each rack? This would provide independent
spaces of segmentation_ids for each.
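
For example, with ml2 configured with two physical networks, the following
already creates two distinct networks sharing VLAN ID 100 (names here are
illustrative):

    neutron net-create rack1-net --provider:network_type vlan \
        --provider:physical_network physnet-rack1 --provider:segmentation_id 100
    neutron net-create rack2-net --provider:network_type vlan \
        --provider:physical_network physnet-rack2 --provider:segmentation_id 100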

-Bob

> 
> Thanks!
> Kyle
> 
>> Thanks
>> -- 
>> Vinay Bannai
>> Email: vban...@gmail.com
>> Google Voice: 415 938 7576
> 
> 
> 




Re: [openstack-dev] [nova][neutron][ml2] Proposal to support VIF security, PCI-passthru/SR-IOV, and other binding-specific data

2014-02-10 Thread Robert Kukura
On 01/31/2014 03:47 PM, Robert Kukura wrote:
> On 01/29/2014 10:26 AM, Robert Kukura wrote:
>> The neutron patch [1] and nova patch [2], proposed to resolve the
>> "get_firewall_required should use VIF parameter from neutron" bug [3],
>> replace the binding:capabilities attribute in the neutron portbindings
>> extension with a new binding:vif_security attribute that is a dictionary
>> with several keys defined to control VIF security. When using the ML2
>> plugin, this binding:vif_security attribute flows from the bound
>> MechanismDriver to nova's GenericVIFDriver.
>>
>> Separately, work on PCI-passthru/SR-IOV for ML2 also requires
>> binding-specific information to flow from the bound MechanismDriver to
>> nova's GenericVIFDriver. See [4] for links to various documents and BPs
>> on this.
>>
>> A while back, in reviewing [1], I suggested a general mechanism to allow
>> ML2 MechanismDrivers to supply arbitrary port attributes in order to
>> meet both the above requirements. That approach was incorporated into
>> [1] and has been cleaned up and generalized a bit in [5].
>>
>> I'm now becoming convinced that proliferating new port attributes for
>> various data passed from the neutron plugin (the bound MechanismDriver
>> in the case of ML2) to nova's GenericVIFDriver is not such a great idea.
>> One issue is that adding attributes keeps changing the API, but this
>> isn't really a user-facing API. Another is that all ports should have
>> the same set of attributes, so the plugin still has to be able to supply
>> those attributes when a bound MechanismDriver does not supply them. See [5].
>>
>> Instead, I'm proposing here that the binding:vif_security attribute
>> proposed in [1] and [2] be renamed binding:vif_details, and used to
>> transport whatever data needs to flow from the neutron plugin (i.e.
>> ML2's bound MechanismDriver) to the nova GenericVIFDriver. This same
>> dictionary attribute would be able to carry the VIF security key/value
>> pairs defined in [1], those needed for [4], as well as any needed for
>> future GenericVIFDriver features. The set of key/value pairs in
>> binding:vif_details that apply would depend on the value of
>> binding:vif_type.
> 
> I've filed a blueprint for this:
> 
>  https://blueprints.launchpad.net/neutron/+spec/vif-details

An initial patch implementing the vif-details BP is at
https://review.openstack.org/#/c/72452/. We need to decide if this
approach is acceptable in order to proceed with the SR-IOV and VIF
security implementations.

> 
> Also, for a similar flow of binding-related information into the
> plugin/MechanismDriver, I've filed a blueprint to implement the existing
> binding:profile attribute in ML2:
> 
>  https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile

I should have a patch implementing the ml2-binding-profile BP tomorrow.

-Bob

> 
> Both of these are admin-only dictionary attributes on port. One is
> read-only for output data, the other read-write for input data. Together
> they enable optional features like SR-IOV PCI passthrough to be
> implemented in ML2 MechanismDrivers without requiring feature-specific
> changes to the plugin itself.
> 
> -Bob
> 
>>
>> If this proposal is agreed to, I can quickly write a neutron BP covering
>> this and provide a generic implementation for ML2. Then [1] and [2]
>> could be updated to use binding:vif_details for the VIF security data
>> and eliminate the existing binding:capabilities attribute.
>>
>> If we take this proposed approach of using binding:vif_details, the
>> internal ML2 handling of binding:vif_type and binding:vif_details could
>> either take the approach used for binding:vif_type and
>> binding:capabilities in the current code, where the values are stored in
>> the port binding DB table. Or they could take the approach in [5] where
>> they are obtained from bound MechanismDriver when needed. Comments on
>> these options are welcome.
>>
>> Please provide feedback on this proposal and the various options in this
>> email thread and/or at today's ML2 sub-team meeting.
>>
>> Thanks,
>>
>> -Bob
>>
>> [1] https://review.openstack.org/#/c/21946/
>> [2] https://review.openstack.org/#/c/44596/
>> [3] https://bugs.launchpad.net/nova/+bug/1112912
>> [4] https://wiki.openstack.org/wiki/Meetings/Passthrough
>> [5] https://review.openstack.org/#/c/69783/
>>
>>
>>
> 
> 
> 




Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-11 Thread Robert Kukura
On 02/10/2014 06:28 PM, Mark McClain wrote:
> All-
> 
> I’d like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg 
> has been valuable contributor to Neutron by actively reviewing, working on 
> bugs, and contributing code.
> 
> Neutron cores please reply back with +1/0/-1 votes.

+1

> 
> mark
> 




Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-19 Thread Robert Kukura
On 02/10/2014 05:46 AM, Mathieu Rohon wrote:
> Hi,
> 
> one other comment inline :

Hi Mathieu, see below:

> 
> On Wed, Feb 5, 2014 at 5:01 PM, Robert Kukura  wrote:
>> On 02/05/2014 09:10 AM, Henry Gessau wrote:
>>> Bob, this is fantastic, I really appreciate all the detail. A couple of
>>> questions ...
>>>
>>> On Wed, Feb 05, at 2:16 am, Robert Kukura  wrote:
>>>
>>>> A couple of interrelated issues with the ML2 plugin's port binding have
>>>> been discussed over the past several months in the weekly ML2 meetings.
>>>> These effect drivers being implemented for icehouse, and therefore need
>>>> to be addressed in icehouse:
>>>>
>>>> * MechanismDrivers need detailed information about all binding changes,
>>>> including unbinding on port deletion
>>>> (https://bugs.launchpad.net/neutron/+bug/1276395)
>>>> * MechanismDrivers' bind_port() methods are currently called inside
>>>> transactions, but in some cases need to make remote calls to controllers
>>>> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
>>>> * Semantics of concurrent port binding need to be defined if binding is
>>>> moved outside the triggering transaction.
>>>>
>>>> I've taken the action of writing up a unified proposal for resolving
>>>> these issues, which follows...
>>>>
>>>> 1) An original_bound_segment property will be added to PortContext. When
>>>> the MechanismDriver update_port_precommit() and update_port_postcommit()
>>>> methods are called and a binding previously existed (whether its being
>>>> torn down or not), this property will provide access to the network
>>>> segment used by the old binding. In these same cases, the portbinding
>>>> extension attributes (such as binding:vif_type) for the old binding will
>>>> be available via the PortContext.original property. It may be helpful to
>>>> also add bound_driver and original_bound_driver properties to
>>>> PortContext that behave similarly to bound_segment and
>>>> original_bound_segment.
>>>>
>>>> 2) The MechanismDriver.bind_port() method will no longer be called from
>>>> within a transaction. This will allow drivers to make remote calls on
>>>> controllers or devices from within this method without holding a DB
>>>> transaction open during those calls. Drivers can manage their own
>>>> transactions within bind_port() if needed, but need to be aware that
>>>> these are independent from the transaction that triggered binding, and
>>>> concurrent changes to the port could be occurring.
>>>>
>>>> 3) Binding will only occur after the transaction that triggers it has
>>>> been completely processed and committed. That initial transaction will
>>>> unbind the port if necessary. Four cases for the initial transaction are
>>>> possible:
>>>>
>>>> 3a) In a port create operation, whether the binding:host_id is supplied
>>>> or not, all drivers' port_create_precommit() methods will be called, the
>>>> initial transaction will be committed, and all drivers'
>>>> port_create_postcommit() methods will be called. The drivers will see
>>>> this as creation of a new unbound port, with PortContext properties as
>>>> shown. If a value for binding:host_id was supplied, binding will occur
>>>> afterwards as described in 4 below.
>>>>
>>>> PortContext.original: None
>>>> PortContext.original_bound_segment: None
>>>> PortContext.original_bound_driver: None
>>>> PortContext.current['binding:host_id']: supplied value or None
>>>> PortContext.current['binding:vif_type']: 'unbound'
>>>> PortContext.bound_segment: None
>>>> PortContext.bound_driver: None
>>>>
>>>> 3b) Similarly, in a port update operation on a previously unbound port,
>>>> all drivers' port_update_precommit() and port_update_postcommit()
>>>> methods will be called, with PortContext properies as shown. If a value
>>>> for binding:host_id was supplied, binding will occur afterwards as
>>>> described in 4 below.
>>>>
>>>> PortContext.original['binding:host_id']: previous value or None
>>>> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
>>>> PortContext.orig

Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-19 Thread Robert Kukura
On 02/05/2014 10:47 AM, Mathieu Rohon wrote:
> Hi,
> 
> thanks for this great proposal

Just following up on the one comment below:

> 
> 
> On Wed, Feb 5, 2014 at 3:10 PM, Henry Gessau  wrote:
>> Bob, this is fantastic, I really appreciate all the detail. A couple of
>> questions ...
>>
>> On Wed, Feb 05, at 2:16 am, Robert Kukura  wrote:
>>
>>> A couple of interrelated issues with the ML2 plugin's port binding have
>>> been discussed over the past several months in the weekly ML2 meetings.
>>> These effect drivers being implemented for icehouse, and therefore need
>>> to be addressed in icehouse:
>>>
>>> * MechanismDrivers need detailed information about all binding changes,
>>> including unbinding on port deletion
>>> (https://bugs.launchpad.net/neutron/+bug/1276395)
>>> * MechanismDrivers' bind_port() methods are currently called inside
>>> transactions, but in some cases need to make remote calls to controllers
>>> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
>>> * Semantics of concurrent port binding need to be defined if binding is
>>> moved outside the triggering transaction.
>>>
>>> I've taken the action of writing up a unified proposal for resolving
>>> these issues, which follows...
>>>
>>> 1) An original_bound_segment property will be added to PortContext. When
>>> the MechanismDriver update_port_precommit() and update_port_postcommit()
>>> methods are called and a binding previously existed (whether it's being
>>> torn down or not), this property will provide access to the network
>>> segment used by the old binding. In these same cases, the portbinding
>>> extension attributes (such as binding:vif_type) for the old binding will
>>> be available via the PortContext.original property. It may be helpful to
>>> also add bound_driver and original_bound_driver properties to
>>> PortContext that behave similarly to bound_segment and
>>> original_bound_segment.
>>>
>>> 2) The MechanismDriver.bind_port() method will no longer be called from
>>> within a transaction. This will allow drivers to make remote calls on
>>> controllers or devices from within this method without holding a DB
>>> transaction open during those calls. Drivers can manage their own
>>> transactions within bind_port() if needed, but need to be aware that
>>> these are independent from the transaction that triggered binding, and
>>> concurrent changes to the port could be occurring.
>>>
>>> 3) Binding will only occur after the transaction that triggers it has
>>> been completely processed and committed. That initial transaction will
>>> unbind the port if necessary. Four cases for the initial transaction are
>>> possible:
>>>
>>> 3a) In a port create operation, whether the binding:host_id is supplied
>>> or not, all drivers' port_create_precommit() methods will be called, the
>>> initial transaction will be committed, and all drivers'
>>> port_create_postcommit() methods will be called. The drivers will see
>>> this as creation of a new unbound port, with PortContext properties as
>>> shown. If a value for binding:host_id was supplied, binding will occur
>>> afterwards as described in 4 below.
>>>
>>> PortContext.original: None
>>> PortContext.original_bound_segment: None
>>> PortContext.original_bound_driver: None
>>> PortContext.current['binding:host_id']: supplied value or None
>>> PortContext.current['binding:vif_type']: 'unbound'
>>> PortContext.bound_segment: None
>>> PortContext.bound_driver: None
>>>
>>> 3b) Similarly, in a port update operation on a previously unbound port,
>>> all drivers' port_update_precommit() and port_update_postcommit()
>>> methods will be called, with PortContext properties as shown. If a value
>>> for binding:host_id was supplied, binding will occur afterwards as
>>> described in 4 below.
>>>
>>> PortContext.original['binding:host_id']: previous value or None
>>> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
>>> PortContext.original_bound_segment: None
>>> PortContext.original_bound_driver: None
>>> PortContext.current['binding:host_id']: current value or None
>>> PortContext.current['binding:vif_type']: 'unbound'
>>> PortContext.bound_segment: None
>>> Por

Re: [openstack-dev] [neutron]The mechanism of physical_network & segmentation_id is logical?

2014-02-24 Thread Robert Kukura
On 02/24/2014 07:09 AM, 黎林果 wrote:
> Hi stackers,
> 
>   When creating a network, if we don't set provider:network_type,
> provider:physical_network or provider:segmentation_id, the
> network_type will come from cfg, but the other two come from the db's
> first record. Code is
> 
> (physical_network,
>  segmentation_id) = ovs_db_v2.reserve_vlan(session)
> 
> 
> 
>   There are two questions.
>   1, network_vlan_ranges = physnet1:100:200
>  Can we configure multiple physical_networks via cfg?

Hi Lee,

You can configure multiple physical_networks. For example:

network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3

This makes ranges of VLAN tags on physnet1 and physnet2 available for
allocation as tenant networks (assuming tenant_network_type = vlan).

This also makes physnet1, physnet2, and physnet3 available for
allocation of VLAN (and flat for OVS) provider networks (with admin
privilege). Note that physnet3 is available for allocation of provider
networks, but not for tenant networks because it does not have a range
of VLANs specified.

> 
>   2, If yes, the physical_network would be uncertain. Is this logical?

Each physical_network is considered to be a separate VLAN trunk, so VLAN
2345 on physnet1 is a different isolated network than VLAN 2345 on
physnet2. All the specified (physical_network,segmentation_id) tuples
form a pool of available tenant networks. Normal tenants have no
visibility of which physical_network trunk their networks get allocated on.
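
To make the pooling behavior concrete, here is a rough Python sketch (not 
the actual plugin code; the function name and variables are illustrative 
only) of how a network_vlan_ranges value like the one above turns into a 
pool of (physical_network, vlan_id) tuples:

# Illustrative sketch only -- not the real OVS/ML2 implementation.
def parse_network_vlan_ranges(entries):
    """Build a tenant-network pool from network_vlan_ranges entries.

    Entries look like 'physnet1:100:200' (VLAN range usable for tenant
    networks) or 'physnet3' (provider networks only, no tenant range).
    """
    provider_physnets = set()
    pool = set()
    for entry in entries:
        parts = entry.split(':')
        physical_network = parts[0]
        provider_physnets.add(physical_network)
        if len(parts) == 3:
            vlan_min, vlan_max = int(parts[1]), int(parts[2])
            for vlan_id in range(vlan_min, vlan_max + 1):
                pool.add((physical_network, vlan_id))
    return provider_physnets, pool

physnets, pool = parse_network_vlan_ranges(
    "physnet1:100:200,physnet1:1000:3000,"
    "physnet2:2000:4000,physnet3".split(','))
assert ('physnet1', 150) in pool and ('physnet2', 2345) in pool
# physnet3 is usable for provider networks but contributes nothing to
# the tenant pool, since it has no VLAN range.
assert 'physnet3' in physnets and ('physnet3', 2345) not in pool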

-Bob

> 
> 
> Regards!
> 
> Lee Li
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]The mechanism of physical_network & segmentation_id is logical?

2014-02-24 Thread Robert Kukura
On 02/24/2014 09:11 PM, 黎林果 wrote:
> Bob,
> 
> Thank you very much. I have understood.
> 
> Another question:
> When creating a network with provider attributes, if the network type is
> VLAN, the provider:segmentation_id must be specified.
> 
> In function: def _process_provider_create(self, context, attrs)
> 
> I think it can come from the db too. If getting it from the db fails, then
> throw an exception.

I think you are suggesting that if the provider:network_type and
provider:physical_network are specified, but provider:segmentation_id is
not specified, then a value should be allocated from the tenant network
pool. Is that correct?

If so, that sounds similar to
https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs,
which is being implemented in the ML2 plugin for icehouse. I would not
expect a similar feature to be implemented for the openvswitch
monolithic plugin, since that is being deprecated.

> 
> what's your opinion?

If I understand it correctly, I agree this feature could be useful.

-Bob

> 
> Thanks!
> 
> 2014-02-24 21:50 GMT+08:00 Robert Kukura :
>> On 02/24/2014 07:09 AM, 黎林果 wrote:
>>> Hi stackers,
>>>
>>>   When creating a network, if we don't set provider:network_type,
>>> provider:physical_network or provider:segmentation_id, the
>>> network_type will come from cfg, but the other two come from the db's
>>> first record. Code is
>>>
>>> (physical_network,
>>>  segmentation_id) = ovs_db_v2.reserve_vlan(session)
>>>
>>>
>>>
>>>   There are two questions.
>>>   1, network_vlan_ranges = physnet1:100:200
>>>  Can we configure multiple physical_networks via cfg?
>>
>> Hi Lee,
>>
>> You can configure multiple physical_networks. For example:
>>
>> network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3
>>
>> This makes ranges of VLAN tags on physnet1 and physnet2 available for
>> allocation as tenant networks (assuming tenant_network_type = vlan).
>>
>> This also makes physnet1, physnet2, and physnet3 available for
>> allocation of VLAN (and flat for OVS) provider networks (with admin
>> privilege). Note that physnet3 is available for allocation of provider
>> networks, but not for tenant networks because it does not have a range
>> of VLANs specified.
>>
>>>
>>>   2, If yes, the physical_network would be uncertain. Is this logical?
>>
>> Each physical_network is considered to be a separate VLAN trunk, so VLAN
>> 2345 on physnet1 is a different isolated network than VLAN 2345 on
>> physnet2. All the specified (physical_network,segmentation_id) tuples
>> form a pool of available tenant networks. Normal tenants have no
>> visibility of which physical_network trunk their networks get allocated on.
>>
>> -Bob
>>
>>>
>>>
>>> Regards!
>>>
>>> Lee Li
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2014-12-02 Thread Robert Kukura
Assuming I still have a vote, I vote +1 for adding Henry and Kevin, both 
of whom I am confident will do a great job as core reviewers.


I'd ask people to consider voting against dropping me from core (and I 
vote -1 on that if I get a vote). During Juno, my plan was to balance my 
time between neutron core work and implementing GBP as part of neutron. 
Unfortunately, that did not go as planned, and the Juno GBP work has 
continued outside neutron, requiring most of my attention, and leaving 
little time for neutron work. I've felt that it would be irresponsible 
to do drive-by reviews in neutron while this is the case, so I had pretty 
much stopped doing neutron reviews until the point when I could devote 
enough attention to follow through on the patches that I review.  But 
I've continued to participate in other ways, including co-leading the 
ML2 sub team. The Juno GBP work is just about complete, and I have 
agreement from my management to make neutron work my top priority for 
the remainder of Kilo. So, as core or not, I expect to be ramping my 
neutron reviewing back up very quickly, and plan to be in SLC next week 
for the mid cycle meetup. If you agree that my contributions as a core 
reviewer have been worthwhile over the long term, and trust that I will 
do as I say and make core reviews my top priority for the remainder of 
Kilo, I ask that you vote -1 on dropping me. If I am not dropped and my 
stats don't improve significantly in the next 30 days, I'll happily 
resign from core.


Regarding dropping Nachi, I will pass, as I've not been paying enough 
attention to the reviews to judge his recent level of contribution.


-Bob

On 12/2/14 10:59 AM, Kyle Mestery wrote:

Now that we're in the thick of working hard on Kilo deliverables, I'd
like to make some changes to the neutron core team. Reviews are the
most important part of being a core reviewer, so we need to ensure
cores are doing reviews. The stats for the 180 day period [1] indicate
some changes are needed for cores who are no longer reviewing.

First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
neutron-core. Bob and Nachi have been core members for a while now.
They have contributed to Neutron over the years in reviews, code and
leading sub-teams. I'd like to thank them for all that they have done
over the years. I'd also like to propose that should they start
reviewing more going forward the core team looks to fast track them
back into neutron-core. But for now, their review stats place them
below the rest of the team for 180 days.

As part of the changes, I'd also like to propose two new members to
neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
been very active in reviews, meetings, and code for a while now. Henry
lead the DB team which fixed Neutron DB migrations during Juno. Kevin
has been actively working across all of Neutron, he's done some great
work on security fixes and stability fixes in particular. Their
comments in reviews are insightful and they have helped to onboard new
reviewers and taken the time to work with people on their patches.

Existing neutron cores, please vote +1/-1 for the addition of Henry
and Kevin to the core team.

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/180

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 port binding?

2015-01-21 Thread Robert Kukura

Hi Harish,

Port binding in ML2 is the process by which a mechanism driver (or once
https://blueprints.launchpad.net/openstack/?searchtext=ml2-hierarchical-port-binding 
is merged, potentially a set of mechanism drivers) is selected for the 
port, determining how connectivity is provided for that port. Since ML2 
is designed to support heterogeneous deployments, it's possible for 
different ports to be bound using different mechanism drivers.


The end results of port binding visible outside ML2 are the values of 
the binding:vif_type and binding:vif_details port attributes that 
control the Nova VIF driver behavior.  The inputs to the port binding 
process are the port and the network to which the port belongs, 
including the network's set of segments, as well as the values of the 
binding:host_id, binding:vnic_type, and binding:profile port attributes. 
Nova (or any L3, DHCP, or service agent owning the port) sets 
binding:host_id to indicate the host on which the port is being bound. 
The setting of this attribute triggers the port binding process.


During port binding, the bind_port() method is called by ML2 on each 
registered mechanism driver until one driver indicates it has succeeded 
by calling PortContext.set_binding(). The driver calls 
PortContext.set_binding()  with the identity of the network segment it 
bound, and the values for the binding:vif_type and binding:vif_details 
attributes. Typical mechanism drivers for L2 agents decide whether they 
can bind the port by looking through the list of network segments for one 
with a network_type value that the agent on the host identified by 
binding:host_id can handle, and if relevant, a physical_network value 
for which that agent has connectivity. The current L2 agent mechanism 
drivers use agents_db info sent from the agents to the service via RPC, 
including the agent's health and the bridge_mappings or 
interface_mappings value that describes its connectivity to 
physical_networks.


The doc strings in neutron.plugins.ml2.driver_api provide more detail on 
port binding and the classes and methods involved.
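
As a rough illustration (a minimal sketch, not taken from any in-tree 
driver; ExampleMechanismDriver and _host_can_reach() are hypothetical, and 
exact names may differ between releases), a driver's bind_port() typically 
follows this pattern:

from neutron.plugins.ml2 import driver_api as api

class ExampleMechanismDriver(api.MechanismDriver):
    """Hypothetical sketch of the bind_port() pattern described above."""

    def initialize(self):
        pass

    def bind_port(self, context):
        host = context.current['binding:host_id']
        for segment in context.network.network_segments:
            # _host_can_reach() stands in for whatever check the driver
            # performs: agent liveness, bridge/interface mappings,
            # enabled tunnel types, binding:profile requirements, etc.
            if self._host_can_reach(host, segment):
                context.set_binding(segment[api.ID],
                                    'ovs',                  # binding:vif_type
                                    {'port_filter': True})  # binding:vif_details
                return
        # Returning without calling set_binding() means this driver
        # declined to bind; ML2 moves on to the next driver.

    def _host_can_reach(self, host, segment):
        # Placeholder policy: bind only flat or vlan segments.
        return segment[api.NETWORK_TYPE] in ('flat', 'vlan')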


Hope this helps,

-Bob

On 1/21/15 9:42 PM, Harish Patil wrote:

Hello,
I’m a newbie here. Can someone please explain what exactly is
involved in ‘port_binding’ in the ML2 mechanism driver, and any specific
pointers? Is it just an association with its corresponding L2 agent? What
is mechanism driver expected to return back to port_bind?

Thanks

Harish






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] No ML2 sub-team meeting today (4/30/2014)

2014-04-30 Thread Robert Kukura

Today's ML2 sub-team meeting is cancelled.

-Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposed changes to core team

2014-05-22 Thread Robert Kukura


On 5/21/14, 4:59 PM, Kyle Mestery wrote:

Neutron cores, please vote +1/-1 for the proposed addition of Carl
Baldwin to Neutron core.



+1

-Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-23 Thread Robert Kukura


On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

Hi Armando:

Those are good points. I will let Bob Kukura chime in on the specifics 
of how we intend to do that integration. But if what you see in the 
prototype/PoC was our final design for integration with Neutron core, 
I would be worried about that too. That specific part of the code 
(events/notifications for DHCP) was done in that way just for the 
prototype - to allow us to experiment with the part that was new and 
needed experimentation, the APIs and the model.


That is the exact reason that we did not initially check the code to 
gerrit - so that we do not confuse the review process with the 
prototype process. But we were requested by other cores to check in 
even the prototype code as WIP patches to allow for review of the API 
parts. That can unfortunately create this very misunderstanding. For 
the review, I would recommend not the WIP patches, as they contain the 
prototype parts as well, but just the final patches that are not 
marked WIP. If you see such issues in that part of the code, please DO 
raise that as that would be code that we intend to upstream.


I believe Bob did discuss the specifics of this integration issue with 
you at the summit, but like I said it is best if he represents that 
side himself.

Armando and Mandeep,

Right, we do need a workable solution for the GBP driver to invoke 
neutron API operations, and this came up at the summit.


We started out in the PoC directly calling the plugin, as is currently 
done when creating ports for agents. But this is not sufficient because 
the DHCP notifications, and I think the nova notifications, are needed 
for VM ports. We also really should be generating the other 
notifications, enforcing quotas, etc. for the neutron resources.


We could just use python-neutronclient, but I think we'd prefer to avoid 
the overhead. The neutron project already depends on 
python-neutronclient for some tests, the debug facility, and the 
metaplugin, so in retrospect, we could have easily used it in the PoC.


With the existing REST code, if we could find the 
neutron.api.v2.base.Controller class instance for each resource, we 
could simply call create(), update(), delete(), and show() on these. I 
didn't see an easy way to find these Controller instances, so I threw 
together some code similar to these Controller methods for the PoC. It 
probably wouldn't take too much work to have 
neutron.manager.NeutronManager provide access to the Controller classes 
if we want to go this route.


The core refactoring effort may eventually provide a nice solution, but 
we can't wait for this. It seems we'll need to either use 
python-neutronclient or get access to the Controller classes in the 
meantime.


Any thoughts on these? Any other ideas?

Thanks,

-Bob


Regards,
Mandeep




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Seeking opinions on scope of code refactoring...

2014-05-23 Thread Robert Kukura


On 5/23/14, 3:07 PM, Paul Michali (pcm) wrote:

Thanks for the comments Gary!

Typically, the device driver (backend) and service driver, for a 
provider won't have any database requirements (at least for VPN). For 
the Cisco VPN, the service driver has one additional table that it 
maintains for mapping, but even in that case, there is no modification 
to the built in tables for the VPN plugin.
If these sorts of additional tables might use foreign keys, cascaded 
deletes, etc., it probably makes sense to go with the precommit approach 
to make these explicitly part of the transactions.


-Bob


So, the action is validation, persist, apply, with the validation 
possibly having a provider override/extend, the apply always having a 
provider action, and the persistence always being the "core" 
persistence.  It's a question of being validate, persist/commit, 
apply, or pre-commit, commit/persist, post-commit, for naming.


Regards,


PCM (Paul Michali)

MAIL . p...@cisco.com 
IRC ... pcm_ (irc.freenode.com )
TW  @pmichali
GPG Key ... 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On May 23, 2014, at 12:40 PM, Gary Duan > wrote:



Hi, Paul,

If the backend driver maintains its own database, I think the 
pre_commit and post_commit approach has an advantage. The typical 
code flow is able to keep the driver and plugin database consistent.


Regarding question 1, where validation methods should be added, I am 
leaning towards A, but I also agree validation hooks can be added 
later when they are needed. It's more important to get provider and 
flavor logic officially supported for services.


Thanks,
Gary



On Fri, May 23, 2014 at 7:24 AM, Paul Michali (pcm) > wrote:


Hi,

I'm working on a task for a BP to separate validation from
persistence logic in L3 services code (VPN currently), so that
providers can override/extend the validation logic (before
persistence).

So I've separated the code for one of the create APIs, placed the
default validation into an ABC class (as a non-abstract method)
that the service drivers inherit from, and modified the plugin to
invoke the validation function in the service driver, before
doing the persistence step.

The flow goes like this...

def create_vpnservice(self, context, vpnservice):
driver = self._get_driver_for_vpnservice(vpnservice)
*driver.validate_create_vpnservice(context, vpnservice)*
super(VPNDriverPlugin, self).create_vpnservice(context,
vpnservice)
driver.apply_create_vpnservice(context, vpnservice)

If the service driver has a validation routine, it'll be invoked,
otherwise, the default method in the ABC for the service driver
will be called and will handle the "baseline" validation. I also
renamed the service driver method that is used for applying the
changes to the device driver as apply_* instead of using the same
name as is used for persistence (e.g. create_vpnservice ->
apply_create_vpnservice).

The questions I have is...

1) Should I create new validation methods A) for every create
(and update?) API (regardless of whether they currently have any
validation logic, B) for resources that have some validation
logic already, or C) only for resources where there are providers
with different validation needs?  I was thinking (B), but would
like to hear peoples' thoughts.

2) I've added validation_* and modified the other service driver
call to apply_*. Should I instead, use the ML2 terminology of pre
commit_* and post commit_*? I personally favor the former, as it
is more descriptive of what is happening in the methods, but I
understand the desire for consistency with other code.

3) Should I create validation methods for code, where defaults
are being set for missing (optional) information? For example,
VPN IKE Policy lifetime being set to units=seconds, value=3600,
if not set. Currently, provider implementations have same
defaults, but *could* potentially use different defaults. The
alternative is to leave this in the persistence code and not
allow it to be changed. This could be deferred, if 1C is chosen
above.

Looking forward to your thoughts...


Thanks!

PCM (Paul Michali)

MAIL . p...@cisco.com 
IRC ... pcm_ (irc.freenode.com )
TW  @pmichali
GPG Key ... 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


_

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-24 Thread Robert Kukura


On 5/23/14, 10:54 PM, Armando M. wrote:

On 23 May 2014 12:31, Robert Kukura  wrote:

On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

Hi Armando:

Those are good points. I will let Bob Kukura chime in on the specifics of
how we intend to do that integration. But if what you see in the
prototype/PoC was our final design for integration with Neutron core, I
would be worried about that too. That specific part of the code
(events/notifications for DHCP) was done in that way just for the prototype
- to allow us to experiment with the part that was new and needed
experimentation, the APIs and the model.

That is the exact reason that we did not initially check the code to gerrit
- so that we do not confuse the review process with the prototype process.
But we were requested by other cores to check in even the prototype code as
WIP patches to allow for review of the API parts. That can unfortunately
create this very misunderstanding. For the review, I would recommend not the
WIP patches, as they contain the prototype parts as well, but just the final
patches that are not marked WIP. If you see such issues in that part of the
code, please DO raise that as that would be code that we intend to upstream.

I believe Bob did discuss the specifics of this integration issue with you
at the summit, but like I said it is best if he represents that side
himself.

Armando and Mandeep,

Right, we do need a workable solution for the GBP driver to invoke neutron
API operations, and this came up at the summit.

We started out in the PoC directly calling the plugin, as is currently done
when creating ports for agents. But this is not sufficient because the DHCP
notifications, and I think the nova notifications, are needed for VM ports.
We also really should be generating the other notifications, enforcing
quotas, etc. for the neutron resources.

I am at loss here: if you say that you couldn't fit at the plugin
level, that is because it is the wrong level!! Sitting above it and
redo all the glue code around it to add DHCP notifications etc
continues the bad practice within the Neutron codebase where there is
not a good separation of concerns: for instance everything is cobbled
together like the DB and plugin logic. I appreciate that some design
decisions have been made in the past, but there's no good reason for a
nice new feature like GP to continue this bad practice; this is why I
feel strongly about the current approach being taken.
Armando, I am agreeing with you! The code you saw was a proof-of-concept 
implementation intended as a learning exercise, not something intended 
to be merged as-is to the neutron code base. The approach for invoking 
resources from the driver(s) will be revisited before the driver code is 
submitted for review.



We could just use python-neutronclient, but I think we'd prefer to avoid the
overhead. The neutron project already depends on python-neutronclient for
some tests, the debug facility, and the metaplugin, so in retrospect, we
could have easily used it in the PoC.

I am not sure I understand what overhead you mean here. Could you
clarify? Actually looking at the code, I see a mind boggling set of
interactions going back and forth between the GP plugin, the policy
driver manager, the mapping driver and the core plugin: they are all
entangled together. For instance, when creating an endpoint the GP
plugin ends up calling the mapping driver that in turns ends up calls
the GP plugin itself! If this is not overhead I don't know what is!
The way the code has been structured makes it very difficult to read,
let alone maintain and extend with other policy mappers. The ML2-like
nature of the approach taken might work well in the context of core
plugin, mechanisms drivers etc, but I would argue that it poorly
applies to the context of GP.
The overhead of using python-neutronclient is that unnecessary 
serialization/deserialization are performed as well as socket 
communication through the kernel. This is all required between 
processes, but not within a single process. A well-defined and efficient 
mechanism to invoke resource APIs within the process, with the same 
semantics as incoming REST calls, seems like a generally useful addition 
to neutron. I'm hopeful the core refactoring effort will provide this 
(and am willing to help make sure it does), but we need something we can 
use until that is available.


One lesson we learned from the PoC is that the implicit management of 
the GP resources (RDs and BDs) is completely independent from the 
mapping of GP resources to neutron resources. We discussed this at the 
last GP sub-team IRC meeting, and decided to package this functionality 
as a separate driver that is invoked prior to the mapping_driver, and 
can also be used in conjunction with other GP back-end drivers. I think 
this will help improve the structure and readability of the code, and it 
also shows the applicability of the ML2-like nature of the driver API.


You a

Re: [openstack-dev] [Neutron][ML2]

2014-03-07 Thread Robert Kukura


On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
Yes, that sounds good to be able to load extensions from a mechanism 
driver.


But another problem I think we have with ML2 plugin is the list 
extensions supported by default [1].
The extensions should only load by MD and the ML2 plugin should only 
implement the Neutron core API.


Keep in mind that ML2 supports multiple MDs simultaneously, so no single 
MD can really control what set of extensions are active. Drivers need to 
be able to load private extensions that only pertain to that driver, but 
we also need to be able to share common extensions across subsets of 
drivers. Furthermore, the semantics of the extensions need to be correct 
in the face of multiple co-existing drivers, some of which know about 
the extension, and some of which don't. Getting this properly defined 
and implemented seems like a good goal for Juno.


-Bob



Any though ?
Édouard.

[1] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87




On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki > wrote:


Hi,

I think it is better to continue the discussion here. It is a good
log :-)

Eugine and I talked the related topic to allow drivers to load
extensions)  in Icehouse Summit
but I could not have enough time to work on it during Icehouse.
I am still interested in implementing it and will register a
blueprint on it.

etherpad in icehouse summit has baseline thought on how to achieve it.
https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
I hope it is a good start point of the discussion.

Thanks,
Akihiro

On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti
mailto:nader.laho...@gmail.com>> wrote:
> Hi Kyle,
>
> Just wanted to clarify: Should I continue using this mailing
list to post my
> question/concerns about ML2? Please advise.
>
> Thanks,
> Nader.
>
>
>
> On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery
mailto:mest...@noironetworks.com>>
> wrote:
>>
>> Thanks Edgar, I think this is the appropriate place to continue
this
>> discussion.
>>
>>
>> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana
mailto:emag...@plumgrid.com>> wrote:
>>>
>>> Nader,
>>>
>>> I would encourage you to first discuss the possible extension
with the
>>> ML2 team. Rober and Kyle are leading this effort and they have
a IRC meeting
>>> every week:
>>>
https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>>>
>>> Bring your concerns on this meeting and get the right feedback.
>>>
>>> Thanks,
>>>
>>> Edgar
>>>
>>> From: Nader Lahouti <nader.laho...@gmail.com>
>>> Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
>>> Date: Thursday, March 6, 2014 12:14 PM
>>> To: OpenStack List <openstack-dev@lists.openstack.org>
>>> Subject: Re: [openstack-dev] [Neutron][ML2]
>>>
>>> Hi Aaron,
>>>
>>> I appreciate your reply.
>>>
>>> Here is some more details on what I'm trying to do:
>>> I need to add new attribute to the network resource using
extensions
>>> (i.e. network config profile) and use it in the mechanism
driver (in the
>>> create_network_precommit/postcommit).
>>> If I use current implementation of Ml2Plugin, when a call is
made to
>>> mechanism driver's create_network_precommit/postcommit the new
attribute is
>>> not included in the 'mech_context'
>>> Here is code from Ml2Plugin:
>>> class Ml2Plugin(...):
>>> ...
>>>def create_network(self, context, network):
>>> net_data = network['network']
>>> ...
>>> with session.begin(subtransactions=True):
>>> self._ensure_default_security_group(context, tenant_id)
>>> result = super(Ml2Plugin,
self).create_network(context,
>>> network)
>>> network_id = result['id']
>>> ...
>>> mech_context = driver_context.NetworkContext(self,
context,
>>> result)
>>> self.mechanism_manager.create_network_precommit(mech_context)
>>>
>>> Also need to include new extension in the
 _supported_extension_aliases.
>>>
>>> So to avoid changes in the existing code, I was going to
create my own
>>> plugin (which will be very similar to Ml2Plugin) and use it as
core_plugin.
>>>
>>> Please advise the right solution implementing that.
>>>
>>> Regards,
>>> Nader.
>>>
>>>
>>> On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen
mailto:aaronoro...@gmail.com>>
>>> wrote:

 Hi Nader,

 Devstack's default plugin is ML2. Usually you wouldn't
'inherit' one
 plugin in another. I'm guessing  you probably wire a driver
that ML2 can use
 though it's hard to tell from the inf

Re: [openstack-dev] [Neutron][ML2]

2014-03-18 Thread Robert Kukura


On 3/18/14, 3:04 PM, racha wrote:

Hi Mathieu,
 Sorry I wasn't following the recent progress on ML2, and I was 
effectively missing the right abstractions of all MDs in my out of 
topic questions.
If I understand correctly, there will be no priority between all MDs 
binding the same port, but an optional "port filter" could also be 
used so that the first responding MD matching the filter will assign 
itself.

Hi Racha,

The bug fix Mathieu referred to below that I am working on will move the 
attempt to bind outside the DB transaction that triggered the 
[re]binding, and thus will involve a separate DB transaction to commit 
the result of the binding. But the basic algorithm for binding ports 
will not be changing as part of this fix. The bind_port() method is 
called sequentially on each mechanism driver in the order they are 
listed in the mechanism_drivers config variable, until one succeeds in 
binding the port, or all have failed to bind the port. Since this will 
now be happening outside a DB transaction, it's possible that more than 
one thread could simultaneously try to bind the same port, and this 
concurrency is handled by having all such threads use the result that 
gets committed first.


-Bob



Thanks for your answer.

Best Regards,
Racha



On Mon, Mar 17, 2014 at 3:17 AM, Mathieu Rohon 
mailto:mathieu.ro...@gmail.com>> wrote:


Hi racha,

I don't think your topic has something to deal with Nader's topics.
Please, create another topic, it would be easier to follow.
FYI, robert kukura is currently refactoring the MD binding, please
have a look here : https://bugs.launchpad.net/neutron/+bug/1276391. As
i understand, there won't be priority between MD that can bind a same
port. The first that will respond to the binding request will give its
vif_type.

Best,

Mathieu

On Fri, Mar 14, 2014 at 8:14 PM, racha mailto:ben...@gmail.com>> wrote:
> Hi,
>   Is it possible (in the latest upstream) to partition the same
> integration bridge "br-int" into multiple isolated partitions
(in terms of
> lvids ranges, patch ports, etc.) between OVS mechanism driver
and ODL
> mechanism driver? And then how can we pass some details to
Neutron API (as
> in the provider segmentation type/id/etc) so that ML2 assigns a
mechanism
> driver to the virtual network? The other alternative I guess is
to create
> another integration bridge managed by a different Neutron
instance? Probably
> I am missing something.
>
> Best Regards,
> Racha
>
>
> On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti
mailto:nader.laho...@gmail.com>>
> wrote:
>>
>> 1) Does it mean an interim solution is to have our own plugin
(and have
>> all the changes in it) and declare it as core_plugin instead of
Ml2Plugin?
>>
>> 2) The other issue as I mentioned before, is that the
extension(s) is not
>> showing up in the result, for instance when create_network is
called
>> [result = super(Ml2Plugin, self).create_network(context,
network)], and as
>> a result they cannot be used in the mechanism drivers when needed.
>>
>> Looks like the process_extensions is disabled when fix for Bug
1201957
>> committed and here is the change:
>> Any idea why it is disabled?
>>
>> --
>> Avoid performing extra query for fetching port security binding
>>
>> Bug 1201957
>>
>>
>> Add a relationship performing eager load in Port and Network
>>
>> models, thus preventing the 'extend' function from performing
>>
>> an extra database query.
>>
>> Also fixes a comment in securitygroups_db.py
>>
>>
>> Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
>>
>>  master   h.1
>>
>> ...
>>
>>  2013.2
>>
>> commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf 1 parent ca421e7
>>
>> Salvatore Orlando salv-orlando authored 8 months ago
>>
>>
>> 2  neutron/db/db_base_plugin_v2.py View
>>
>>  @@ -995,7 +995,7 @@ def create_network(self, context, network):
>>
>> 995   'status': constants.NET_STATUS_ACTIVE}
>>
>> 996   network = models_v2.Network(**args)
>>
>> 997   context.session.add(network)
>>
>> 998 -return self._make_network_dict(network)
>>
>> 998 +return self._make_network_dict(network,
>> process_extensions=False)
>>
   

Re: [openstack-dev] [Neutron] Multiprovider API documentation

2014-04-11 Thread Robert Kukura


On 4/10/14, 6:35 AM, Salvatore Orlando wrote:
The bug for documenting the 'multi-provider' API extension is still 
open [1].
The bug report has a good deal of information, but perhaps it might be 
worth also documenting how ML2 uses the segment information, as this 
might be useful to understand when one should use the 'provider' 
extension and when instead the 'multi-provider' would be a better fit.


Unfortunately I do not understand enough how ML2 handles multi-segment 
networks, so I hope somebody from the ML2 team can chime in.
Here's a quick description of ML2 port binding, including how 
multi-segment networks are handled:


   Port binding is how the ML2 plugin determines the mechanism driver
   that handles the port, the network segment to which the port is
   attached, and the values of the binding:vif_type and
   binding:vif_details port attributes. Its inputs are the
   binding:host_id and binding:profile port attributes, as well as the
   segments of the port's network. When port binding is triggered, each
   registered mechanism driver's bind_port() function is called, in the
   order specified in the mechanism_drivers config variable, until one
   succeeds in binding, or all have been tried. If none succeed, the
   binding:vif_type attribute is set to 'binding_failed'. In
   bind_port(), each mechanism driver checks if it can bind the port on
   the binding:host_id host, using any of the network's segments,
   honoring any requirements it understands in binding:profile. If it
   can bind the port, the mechanism driver calls
   PortContext.set_binding() from within bind_port(), passing the
   chosen segment's ID, the values for binding:vif_type and
   binding:vif_details, and optionally, the port's status. A common
   base class for mechanism drivers supporting L2 agents implements
   bind_port() by iterating over the segments and calling a
   try_to_bind_segment_for_agent() function that decides whether the
   port can be bound based on the agents_db info periodically reported
   via RPC by that specific L2 agent. For network segment types of
   'flat' and 'vlan', the try_to_bind_segment_for_agent() function
   checks whether the L2 agent on the host has a mapping from the
   segment's physical_network value to a bridge or interface. For
   tunnel network segment types, try_to_bind_segment_for_agent() checks
   whether the L2 agent has that tunnel type enabled.
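
As a hedged sketch (the names and dict keys are illustrative; the real 
logic lives in the agent-based drivers' try_to_bind_segment_for_agent() 
and uses agents_db info reported over RPC), the per-segment check looks 
roughly like this:

def try_to_bind_segment_for_agent(segment, agent):
    # 'configurations' is the config dict the L2 agent reports via RPC.
    mappings = agent['configurations'].get('bridge_mappings', {})
    tunnel_types = agent['configurations'].get('tunnel_types', [])
    network_type = segment['network_type']
    if network_type in ('flat', 'vlan'):
        # Bindable only if the agent has a bridge/interface mapping for
        # this segment's physical_network.
        return segment['physical_network'] in mappings
    # For tunnel segment types, the agent must have that type enabled.
    return network_type in tunnel_types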


Note that, although ML2 can manage binding to multi-segment networks, 
neutron does not manage bridging between the segments of a multi-segment 
network. This is assumed to be done administratively.


Finally, at least in ML2, the providernet and multiprovidernet 
extensions are two different APIs to supply/view the same underlying 
information. The older providernet extension can only deal with 
single-segment networks, but is easier to use. The newer 
multiprovidernet extension handles multi-segment networks and 
potentially supports an extensible set of segment properties, but is 
more cumbersome to use, at least from the CLI. Either extension can be 
used to create single-segment networks with ML2. Currently, ML2 network 
operations return only the providernet attributes 
(provider:network_type, provider:physical_network, and 
provider:segmentation_id) for single-segment networks, and only the 
multiprovidernet attribute (segments) for multi-segment networks. It 
could be argued that all attributes should be returned from all 
operations, with a provider:network_type value of 'multi-segment' 
returned when the network has multiple segments. A blueprint in the 
works for juno that lets each ML2 type driver define whatever segment 
properties make sense for that type may lead to eventual deprecation of 
the providernet extension.
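
For example, creating a two-segment network through the multiprovidernet 
extension looks roughly like this (a sketch using python-neutronclient; 
the credentials and segment values are made up):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# 'segments' replaces the three provider:* attributes; each segment
# carries its own network_type/physical_network/segmentation_id.
net = neutron.create_network(
    {'network': {'name': 'multi-segment-example',
                 'segments': [
                     {'provider:network_type': 'vlan',
                      'provider:physical_network': 'physnet1',
                      'provider:segmentation_id': 100},
                     {'provider:network_type': 'vlan',
                      'provider:physical_network': 'physnet2',
                      'provider:segmentation_id': 100}]}})['network']
# Bridging between the two VLAN segments is not done by neutron; as noted
# above, that is assumed to be handled administratively.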


Hope this helps,

-Bob



Salvatore


[1] https://bugs.launchpad.net/openstack-api-site/+bug/1242019


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] how to add a new plugin to the neutron repo?

2014-10-07 Thread Robert Kukura
If you are thinking about adding a new neutron core plugin, please 
consider whether you could accomplish what you need with an ML2 
MechanismDriver instead. It's a lot less code to write, review, and maintain.


-Bob

On 10/6/14 12:51 AM, thanh le giang wrote:


Hi all

I want to add a plugin to the Public Neutron Repository. Although I 
have read the gerrit workflow and Neutron Development page 
(https://wiki.openstack.org/wiki/NeutronDevelopment), I don't know 
clearly how we can add our plugin to neutron repository. Could you 
please share me the process?


Thanks and best regards,


--
L.G.Thanh

Email:legiangt...@gmail.com 
lgth...@fit.hcmus.edu.vn 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Group-based Policy] Database migration chain

2014-10-10 Thread Robert Kukura


On 10/7/14 6:36 PM, Ivar Lazzaro wrote:
I posted a patch that implements the "Different DB Different Chain" 
approach [0].
That does not mean that this approach is the chosen one! It's just to 
have a grasp of what the change looks like.


The "Same DB different chain" solution is much simpler to implement 
(basically you just specify a different version table in the alembic 
environment) so I haven't posted anything for that.
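
For illustration, the "same DB, different chain" variant amounts to 
something like the following in a GBP-specific alembic env.py (a sketch 
only; the gbp_alembic_version table name is just an example):

from alembic import context
from sqlalchemy import create_engine

def run_migrations_online():
    # Reuse neutron's connection URL, but track GBP's migration chain in
    # its own alembic version table so the two chains stay independent.
    engine = create_engine(context.config.get_main_option('sqlalchemy.url'))
    with engine.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=None,  # GBP model metadata would go here
            version_table='gbp_alembic_version')
        with context.begin_transaction():
            context.run_migrations()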


One thing I'm particularly interested about is to hear packagers 
opinions about which approach would be the preferred one: Same DB or 
Different?
It seems to me that deployment tools such as puppet scripts would also 
be simpler if the GBP service plugin used the neutron DB, as there would 
be no need to create a separate DB, set its permissions, put its URL 
into neutron's config file, etc.. All that would be needed at deployment 
time is to run the additional gbp-db-manage tool to perform the GBP DB 
migrations. Am I missing anything?


With dependencies only in one direction, and the foreign keys GBP 
depends on (neutron resource IDs) unlikely to be changed by neutron 
migrations during Kilo, I don't think we need to worry about 
interleaving GBP migrations with neutron migrations. On initial 
deployments or version upgrades, it should be sufficient to run 
gbp-db-manage after neutron-db-manage. On downgrades, some situations 
might require running gbp-db-manage before neutron-db-manage. This seems 
not to be affected by whether the two migration chains are in the same 
DB/schema or different ones.
Also, on the line of Bob's comment in my patch, is there any kind of 
compatibility or performance issue anyone is aware of about in using 
cross schema FKs?
In addition to compatibility and performance, I'm also concerned about 
DB connection management when the same server is using multiple 
connection URLs. I'm not convinced the approach in the patch is 
sufficient. At least with some DBs, wouldn't we need a separate 
sqlalchemy.create_engine() call with each DB's connection URL, which 
might require using separate context and session objects rather than the 
ones neutron uses?
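
To make the concern concrete, the second connection URL would presumably 
need something along these lines (a sketch only; the URL and names are 
illustrative), separate from the session neutron itself uses:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Hypothetical GBP-specific connection, distinct from neutron's
# configured connection URL.
gbp_engine = create_engine('mysql://user:pass@localhost:3306/neutron_gbp')
GbpSession = sessionmaker(bind=gbp_engine)

# Each GBP DB operation then needs its own session, which cannot simply
# share neutron's context.session or participate in its transactions.
session = GbpSession()
try:
    # ... query/persist GBP models here ...
    session.commit()
except Exception:
    session.rollback()
    raise
finally:
    session.close()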


-Bob


Thanks,
Ivar.

[0] https://review.openstack.org/#/c/126383/

On Mon, Oct 6, 2014 at 11:09 AM, Ivar Lazzaro > wrote:


I believe Group-based Policy (which this thread is about) will
use the Neutron
database configuration for its dependent database.

If Neutron is configured for:
connection = mysql://user:pass@locationX:3306/neutron
then GBP would use:
connection = mysql://user:pass@locationX:3306/neutron_gbp


That's correct, that would be the likely approach if we go with
the "different schema" route.

if you can get the “other database” to be accessible from the
target database via “otherdatabase.sometable”, then you’re in.
from SQLAlchemy’s perspective, it’s just a name with a dot.  
It’s the database itself that has to support the foreign key

at the scope you are shooting for.


I'm experimenting this approach with our code and it seems to be
the case. '
I feel that having the constraint of pointing the same database
connection with a different schema is pretty acceptable given how
tight is GBP to Neutron.

On Sat, Oct 4, 2014 at 8:54 AM, Henry Gessau mailto:ges...@cisco.com>> wrote:

Clint Byrum mailto:cl...@fewbar.com>> wrote:
>
> Excerpts from Mike Bayer's message of 2014-10-04 08:10:38 -0700:
>>
>> On Oct 4, 2014, at 1:10 AM, Kevin Benton mailto:blak...@gmail.com>> wrote:
>>
>>> Does sqlalchemy have good support for cross-database
foreign keys? I was under the impression that they cannot be
implemented with the normal syntax and semantics of an
intra-database foreign-key constraint.
>>
>> cross “database” is not typically portable, but cross
“schema” is.
>>
>> different database vendors have different notions of
“databases” or “schemas”.
>>
>> if you can get the “other database” to be accessible from
the target database via “otherdatabase.sometable”, then you’re in.
>>
>> from SQLAlchemy’s perspective, it’s just a name with a
dot.   It’s the database itself that has to support the
foreign key at the scope you are shooting for.
>>
>
> All true, however, there are zero guarantees that databases
will be
> hosted on the same server, and typically permissions are
setup to prevent
> cross-schema joins.

I believe Group-based Policy (which this thread is about) will
use the Neutron
database configuration for its dependent database.

If Neutron is configured for:
  connection = mysql://user:pass@locationX:3306/neutron
then GBP would use:
  connecti

Re: [openstack-dev] [Neutron] Why doesn't ml2-ovs work when it's "host" != the dhcp agent's host?

2014-10-20 Thread Robert Kukura

Hi Noel,

The ML2 plugin uses the binding:host_id attribute of port to control 
port binding. For compute ports, nova sets binding:host_id when 
creating/updating the neutron port, and ML2's openvswitch mechanism 
driver will look in agents_db to make sure the openvswitch L2 agent is 
running on that host, and that it has a bridge mapping for any needed 
physical network or has the appropriate tunnel type enabled. The 
binding:host_id attribute also gets set on DHCP, L3, and other agents' 
ports, and must match the host of the openvswitch-agent on that node or 
ML2 will not be able to bind the port. I suspect your configuration may 
be resulting in these not matching, and the DHCP port's binding:vif_type 
attribute being 'binding_failed'.


I'd suggest running "neutron port-show" as admin on the DHCP port to see 
what the values of binding:vif_type and binding:host_id are, and running 
"neutron agent-list" as admin to make sure there is an L2 agent on that 
node, and maybe "neutron agent-show" as admin to get that agent's config 
details.


-Bob


On 10/20/14 1:28 PM, Noel Burton-Krahn wrote:
I'm running OpenStack Icehouse with Neutron ML2/OVS.  I've configured 
the ml2-ovs-plugin on all nodes with host = the IP of the host 
itself.  However, my dhcp-agent may float from host to host for 
failover, so I configured it with host="floating".  That doesn't 
work.  In this case, the ml2-ovs-plugin creates a namespace and a tap 
interface for the dhcp agent, but OVS doesn't route any traffic to the 
dhcp agent.  It *does* work if the dhcp agent's host is the same as 
the ovs plugin's host, but if my dhcp agent migrates to another host, 
it loses its configuration since it now has a different host name.


So my question is, what does host mean for the ML2 dhcp agent and host 
can I get it to work if the dhcp agent's host != host for the ovs plugin?


Case 1: fails: running with dhcp agent's host = "floating", ovs 
plugin's host = IP-of-server

dhcp agent is running in netns created by ovs-plugin
dhcp agent never receives network traffic

Case 2: ok: running with dhcp agent's host = ovs plugin's host = 
IP-of-server
dhcp agent is running in netns created by ovs-plugin (different tap 
name than case 1)

dhcp agent works

--
Noel





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] no ML2 IRC meeting today

2014-11-12 Thread Robert Kukura
Let's skip the ML2 IRC meeting this week, while some people are still 
traveling and/or recovering. Next week I hope to have good discussions 
regarding a common ML2 driver repo vs. separate repos per vendor, as 
well as the ML2 BPs for Kilo.


-Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Robert Kukura

On 8/4/14, 4:27 PM, Mark McClain wrote:

All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be 
attempting.

* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team that 
has been implementing this feature for Juno does not see this work as an 
experiment to gather data, but rather as an important innovative feature 
to put in the hands of early adopters in Juno and into widespread 
deployment with a stable API as early as Kilo.


The group-based policy BP approved for Juno addresses the critical need 
for a more usable, declarative, intent-based interface for cloud 
application developers and deployers, that can co-exist with Neutron's 
current networking-hardware-oriented API and work nicely with all 
existing core plugins. Additionally, we believe that this declarative 
approach is what is needed to properly integrate advanced services into 
Neutron, and will go a long way towards resolving the difficulties so 
far trying to integrate LBaaS, FWaaS, and VPNaaS APIs into the current 
Neutron model.


Like any new service API in Neutron, the initial group policy API 
release will be subject to incompatible changes before being declared 
"stable", and hence would be labeled "experimental" in Juno. This does 
not mean that it is an experiment where to "fail fast" is an acceptable 
outcome. The sub-team's goal is to stabilize the group policy API as 
quickly as possible,  making any needed changes based on early user and 
operator experience.


The L and M cycles that Mark suggests below to "revisit the status" are 
a completely different time frame. By the L or M cycle, we should be 
working on a new V3 Neutron API that pulls these APIs together into a 
more cohesive core API. We will not be in a position to do this properly 
without the experience of using the proposed group policy extension with 
the V2 Neutron API in production.


If we were failing miserably, or if serious technical issues were being 
identified with the patches, some delay might make sense. But, other 
than Mark's -2 blocking the initial patches from merging, we are on 
track to complete the planned work in Juno.


-Bob



Why this email?
---
Our community has been discussing and working on Group Based Policy 
(GBP) for many months.  I think the discussion has reached a point 
where we need to openly discuss a few issues before moving forward.  I 
recognize that this discussion could create frustration for those who 
have invested significant time and energy, but the reality is we need 
ensure we are making decisions that benefit all members of our 
community (users, operators, developers and vendors).


Experimentation

I like that as a community we are exploring alternate APIs.  The 
process of exploring via real user experimentation can produce 
valuable results.  A good experiment should be designed to fail fast 
to enable further trials via rapid iteration.


Merging large changes into the master branch is the exact opposite of 
failing fast.


The master branch deliberately favors small iterative changes over 
time.  Releasing a new version of the proposed API every six months 
limits our ability to learn and make adjustments.


In the past, we've released LBaaS, FWaaS, and VPNaaS as experimental 
APIs.  The results have been very mixed as operators either shy away 
from testing/offering the API or embrace the API with the expectation 
that the community will provide full API support and migration.  In 
both cases, the experiment fails because we either could not get the 
data we need or are unable to make significant changes without 
accepting a non-trivial amount of technical debt via migrations or 
draft API support.


Next Steps
--
Previously, the GPB subteam used a Github account to host the 
development, but the workflows and tooling do not align with 
OpenStack's development model. I'd like to see us create a group based 
policy project in StackForge.  StackForge will host the code and 
enable us to follow the same open review and QA processes we use in 
the main project while we are developing and testing the API. The 
infrastructure there will benefit us as we will have a separate review 
velocity and can frequently publish libraries to PyPI.  From a 
technical perspective, the 13 new entities in GPB [1] do not require 
any changes to internal Neutron data structures.  The docs[2] also 
suggest that an external plugin or service would work to make it 
easier to speed development.


End State
-
APIs require time to fully bake and right now it is too early to know 
the final outcome.  Using StackForge will allow the team to retain all 
of its options including: merging the code into Neutron, adopting the 
repository as sub-project of the Network Program, leaving the project 
in StackForge project or 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Robert Kukura


On 8/5/14, 11:04 AM, Gary Kotton wrote:

Hi,
Is there any description of how this will be consumed by Nova? My 
concern is this code landing there.

Hi Gary,

Initially, an endpoint's port_id is passed to Nova using "nova boot ... 
--nic port-id= ...", requiring no changes to Nova. Later, 
slight enhancements to Nova would allow using commands such as "nova 
boot ... --nic ep-id= ..." or "nova boot ... --nic 
epg-id= ...".


-Bob

Thanks
Gary

From: Robert Kukura <kuk...@noironetworks.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>

Date: Tuesday, August 5, 2014 at 5:20 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way 
forward


On 8/4/14, 4:27 PM, Mark McClain wrote:

All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be 
attempting.

* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team 
that has been implementing this feature for Juno does not see this 
work as an experiment to gather data, but rather as an important 
innovative feature to put in the hands of early adopters in Juno and 
into widespread deployment with a stable API as early as Kilo.


The group-based policy BP approved for Juno addresses the critical 
need for a more usable, declarative, intent-based interface for cloud 
application developers and deployers that can co-exist with Neutron's 
current networking-hardware-oriented API and work nicely with all 
existing core plugins. Additionally, we believe that this declarative 
approach is what is needed to properly integrate advanced services 
into Neutron, and will go a long way towards resolving the 
difficulties encountered so far in trying to integrate the LBaaS, 
FWaaS, and VPNaaS APIs into the current Neutron model.


Like any new service API in Neutron, the initial group policy API 
release will be subject to incompatible changes before being declared 
"stable", and hence would be labeled "experimental" in Juno. This does 
not mean that it is an experiment where to "fail fast" is an 
acceptable outcome. The sub-team's goal is to stabilize the group 
policy API as quickly as possible,  making any needed changes based on 
early user and operator experience.


The L and M cycles that Mark suggests below to "revisit the status" 
are a completely different time frame. By the L or M cycle, we should 
be working on a new V3 Neutron API that pulls these APIs together into 
a more cohesive core API. We will not be in a position to do this 
properly without the experience of using the proposed group policy 
extension with the V2 Neutron API in production.


If we were failing miserably, or if serious technical issues were 
being identified with the patches, some delay might make sense. But, 
other than Mark's -2 blocking the initial patches from merging, we are 
on track to complete the planned work in Juno.


-Bob



Why this email?
---
Our community has been discussing and working on Group Based Policy 
(GBP) for many months.  I think the discussion has reached a point 
where we need to openly discuss a few issues before moving forward. 
 I recognize that this discussion could create frustration for those 
who have invested significant time and energy, but the reality is we 
need to ensure we are making decisions that benefit all members of our 
community (users, operators, developers and vendors).


Experimentation

I like that as a community we are exploring alternate APIs.  The 
process of exploring via real user experimentation can produce 
valuable results.  A good experiment should be designed to fail fast 
to enable further trials via rapid iteration.


Merging large changes into the master branch is the exact opposite of 
failing fast.


The master branch deliberately favors small iterative changes over 
time.  Releasing a new version of the proposed API every six months 
limits our ability to learn and make adjustments.


In the past, we've released LBaaS, FWaaS, and VPNaaS as experimental 
APIs.  The results have been very mixed as operators either shy away 
from testing/offering the API or embrace the API with the expectation 
that the community will provide full API support and migration.  In 
both cases, the experiment fails because we either could not get the 
data we need or are unable to make significant changes without 
accepting a non-trivial amount of technical debt via migrations or 
draft API support.


Next Steps
--
Previously, the GBP subteam used a Github account to host the 
development, but the workflows and tooling do not align with 
OpenStack's development model. I'd like to see us create a group 
based policy proj

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-05 Thread Robert Kukura


On 8/5/14, 1:23 PM, Gary Kotton wrote:
Ok, thanks for the clarification. This means that it will not be done 
automagically as it is today -- the tenant will need to create a 
Neutron port and then pass that through.
Not quite. Using the group policy API, the port will be created 
implicitly when the endpoint is created (unless an existing port_id is 
passed explicitly). All the user will need to do is obtain the port_id 
value from the endpoint and pass this to nova.


The goal is to make passing "--nic epg-id=<endpoint-group-id>" just as 
automatic as passing "--nic net-id=<network-id>". Code in Nova's 
Neutron integration would handle the epg-id by passing it to 
create_endpoint, and then using the port_id that is returned in the result.
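
For illustration, here is a minimal sketch (purely hypothetical, not 
actual Nova or GBP code; the client and helper names are made up) of 
the flow described above, where Nova would turn an endpoint-group id 
into a Neutron port id by creating an endpoint:

# Hypothetical sketch only: shows the flow described above.
def allocate_port_for_epg(gbp_client, epg_id, name=None):
    # Create a GBP endpoint in the given group and return its Neutron port.
    endpoint = gbp_client.create_endpoint(
        {'endpoint': {'endpoint_group_id': epg_id, 'name': name or ''}})
    # The endpoint implicitly creates (or reuses) a Neutron port; Nova would
    # then plug the VIF using this port_id exactly as it does for --nic port-id.
    return endpoint['endpoint']['port_id']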


-Bob

Thanks
Gary

From: Robert Kukura <kuk...@noironetworks.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>

Date: Tuesday, August 5, 2014 at 8:13 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way 
forward



On 8/5/14, 11:04 AM, Gary Kotton wrote:

Hi,
Is there any description of how this will be consumed by Nova. My 
concern is this code landing there.

Hi Gary,

Initially, an endpoint's port_id is passed to Nova using "nova boot 
... --nic port-id=<port-id> ...", requiring no changes to Nova. 
Later, slight enhancements to Nova would allow using commands such as 
"nova boot ... --nic ep-id=<endpoint-id> ..." or "nova boot ... 
--nic epg-id=<endpoint-group-id> ...".


-Bob

Thanks
Gary

From: Robert Kukura <kuk...@noironetworks.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>

Date: Tuesday, August 5, 2014 at 5:20 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Group Based Policy and the way 
forward


On 8/4/14, 4:27 PM, Mark McClain wrote:

All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be 
attempting.

* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.
The disconnect here is that the Neutron group-based policy sub-team 
that has been implementing this feature for Juno does not see this 
work as an experiment to gather data, but rather as an important 
innovative feature to put in the hands of early adopters in Juno and 
into widespread deployment with a stable API as early as Kilo.


The group-based policy BP approved for Juno addresses the critical 
need for a more usable, declarative, intent-based interface for cloud 
application developers and deployers, that can co-exist with 
Neutron's current networking-hardware-oriented API and work nicely 
with all existing core plugins. Additionally, we believe that this 
declarative approach is what is needed to properly integrate advanced 
services into Neutron, and will go a long way towards resolving the 
difficulties so far trying to integrate LBaaS, FWaaS, and VPNaaS APIs 
into the current Neutron model.


Like any new service API in Neutron, the initial group policy API 
release will be subject to incompatible changes before being declared 
"stable", and hence would be labeled "experimental" in Juno. This 
does not mean that it is an experiment where to "fail fast" is an 
acceptable outcome. The sub-team's goal is to stabilize the group 
policy API as quickly as possible,  making any needed changes based 
on early user and operator experience.


The L and M cycles that Mark suggests below to "revisit the status" 
are a completely different time frame. By the L or M cycle, we should 
be working on a new V3 Neutron API that pulls these APIs together 
into a more cohesive core API. We will not be in a position to do 
this properly without the experience of using the proposed group 
policy extension with the V2 Neutron API in production.


If we were failing miserably, or if serious technical issues were 
being identified with the patches, some delay might make sense. But, 
other than Mark's -2 blocking the initial patches from merging, we 
are on track to complete the planned work in Juno.


-Bob



Why this email?
---
Our community has been discussing and working on Group Based Policy 
(GBP) for many months.  I think the discussion has reached a point 
where we need to openly discuss a few issues before moving forward. 
 I recognize that this discussion could create frustration for those 
who have invested significant time and energy, but the reality is we 
need to ensure we are making decisions that benefit all members of our 
community (users, operators, developers and vendors).


Experimentation

I like that as a community we are exploring alternate APIs.  The 
process of exploring via real user experimentation can produce 
valuable results.  A good experi

Re: [openstack-dev] [Neutron] tempest failure: No more IP addresses available on network

2013-10-23 Thread Robert Kukura
On 10/23/2013 07:22 PM, Nachi Ueno wrote:
> Hi folks
> 
> This patch was the culprit, so we have reverted it.
> https://review.openstack.org/#/c/53459/1 (Note, this is a revert of a revert)
> 
> However even if it is reverted, the error happens
> http://logstash.openstack.org/index.html#eyJzZWFyY2giOiJcIklwQWRkcmVzc0dlbmVyYXRpb25GYWlsdXJlQ2xpZW50OiBObyBtb3JlIElQIGFkZHJlc3NlcyBhdmFpbGFibGUgb24gbmV0d29yayBcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzgyNTcwMTcxMjQ3fQ==
> 
> I tested tempest in local, it works with the patch and without patch.
> # As you know, this is a timing issue.. so this doesn't mean there is no 
> issue..
> 
> I'm reading logs, and I found some exceptions.
> https://etherpad.openstack.org/p/debug1243726
> 
> It looks like issue in IPAllocationRange.
> 
> My next culprit is the delete_subnet in ML2
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L371
> 
>> Kyle, Bob
> Do you have any thought on this?

Hi Nachi,

Are you suggesting with_lockmode('update') is needed on the queries used
to find the ports and subnets to auto-delete in ML2's delete_network()
and delete_subnet()? I had tried that, but backed it out due to the
postgres issue with joins. I could be wrong, but I also came to the
conclusion that locking for update locks the rows returned from the
select, not the whole table, and thus would not prevent new rows from
being added concurrently with the transaction.

Or do you have another theory on what is going wrong?

Also, is this the same issue we'd been hitting for a while with the
openvswitch plugin, or is it only since we switched devstack to default
to ml2?

-Bob

> 
> Best
> Nachi
> 
> 
> 2013/10/23 Terry Wilson :
>>> Hi,
>>>
>>> I just noticed several number of neutron check
>>> (tempest-devstack-vm-neutron-isolated) fails with the same error:
>>> "No more IP addresses available on network".
>>> This errors suddenly starts to occur from yesterday.
>>>
>>> I checked there is any commit merge around the time when this failure
>>> started,
>>> but there is no commit around the time.
>>> I am not sure it is a temporary issue.
>>>
>>> I files a bug in neutron: https://bugs.launchpad.net/neutron/+bug/1243726
>>>
>>> It is late in my timezone. I hope someone jumps into this issue.
>>>
>>>
>>> logstash stats is here:
>>> http://logstash.openstack.org/index.html#eyJzZWFyY2giOiJcIklwQWRkcmVzc0dlbmVyYXRpb25GYWlsdXJlQ2xpZW50OiBObyBtb3JlIElQIGFkZHJlc3NlcyBhdmFpbGFibGUgb24gbmV0d29yayBcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MjUzNzk0OTgxMH0=
>>>
>>> Thanks,
>>> Akihiro
>>
>> Yes. This is still happening on my patch (and many others) as well.
>>
>> Terry
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-18 Thread Robert Kukura
On 11/18/2013 03:25 PM, Edgar Magana wrote:
> Developers,
> 
> This topic has been discussed before but I do not remember if we have a
> good solution or not.

The ML2 plugin addresses this by calling each MechanismDriver twice. The
create_network_precommit() method is called as part of the DB
transaction, and the create_network_postcommit() method is called after
the transaction has been committed. Interactions with devices or
controllers are done in the postcommit methods. If the postcommit method
raises an exception, the plugin deletes that partially-created resource
and returns the exception to the client. You might consider a similar
approach in your plugin.
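
For reference, here is a minimal sketch of that two-phase pattern, 
loosely following the ML2 MechanismDriver interface; the backend client 
is just a placeholder, not a real integration:

# Minimal sketch of the precommit/postcommit split described above.
from neutron.plugins.ml2 import driver_api as api


class SketchMechanismDriver(api.MechanismDriver):

    def initialize(self):
        # Called once by ML2 at startup; set up backend client state here.
        self.backend = None  # placeholder for a real controller/device client

    def create_network_precommit(self, context):
        # Runs inside the DB transaction: validate and record state only.
        # Raising here aborts the transaction cleanly.
        pass

    def create_network_postcommit(self, context):
        # Runs after the transaction commits: the safe place to call the
        # backend. An exception here makes ML2 delete the partially-created
        # network and return the error to the API client.
        network = context.current
        # self.backend.create_network(network)  # placeholder call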

-Bob

> Basically, if concurrent API calls are sent to Neutron, all of them are
> sent to the plug-in level where two actions have to be made:
> 
> 1. DB transaction – Not just for data persistence but also to collect the
> information needed for the next action
> 2. Plug-in back-end implementation – In our case this is a call to the python
> library that consequently calls PLUMgrid REST GW (soon SAL)
> 
> For instance:
> 
> def create_port(self, context, port):
>     with context.session.begin(subtransactions=True):
>         # Plugin DB - Port Create and Return port
>         port_db = super(NeutronPluginPLUMgridV2,
>                         self).create_port(context, port)
>         device_id = port_db["device_id"]
>         if port_db["device_owner"] == "network:router_gateway":
>             router_db = self._get_router(context, device_id)
>         else:
>             router_db = None
>         try:
>             LOG.debug(_("PLUMgrid Library: create_port() called"))
>             # Back-end implementation
>             self._plumlib.create_port(port_db, router_db)
>         except Exception:
>             …
> 
> The way we have implemented this at the plugin level in Havana (even in
> Grizzly) is that both actions are wrapped in the same "transaction", which
> automatically rolls back any operation done to its original state,
> mostly protecting the DB from having any inconsistent state or leftover
> data if the back-end part fails.
> The problem that we are experiencing is that when concurrent calls to the
> same API are sent, the operations at the plug-in back-end take
> long enough to make the next concurrent API call get stuck at the DB
> transaction level, which creates a hung state for the Neutron Server to
> the point that all concurrent API calls will fail.
> 
> This can be fixed if we include some "locking" system such as calling:
> 
> from neutron.common import utils
> …
> 
> @utils.synchronized('any-name', external=True)
> def create_port(self, context, port):
> …
> 
> Obviously, this will create a serialization of all concurrent calls,
> which will end up in really bad performance. Does anyone have a
> better solution?
> 
> Thanks,
> 
> Edgar
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-18 Thread Robert Kukura
On 11/18/2013 05:21 PM, Edgar Magana wrote:
> Hi All,
> 
> Thank you everybody for your input. It is clear that any solution requires
> changes at the plugin level (we were trying to avoid that). So, I am
> wondering if a re-factor of this code is needed or not (maybe not).
> The ML2 solution is probably the best alternative right now, so we may go
> for it.

This could be a good time to consider converting the plugin to an ML2
MechanismDriver. I'm happy to help work through the details of that if
you are interested.

-Bob

> 
> Any extra input is welcome!
> 
> Thanks,
> 
> Edgar
> 
> On 11/18/13 12:55 PM, "Robert Kukura"  wrote:
> 
>> On 11/18/2013 03:25 PM, Edgar Magana wrote:
>>> Developers,
>>>
>>> This topic has been discussed before but I do not remember if we have a
>>> good solution or not.
>>
>> The ML2 plugin addresses this by calling each MechanismDriver twice. The
>> create_network_precommit() method is called as part of the DB
>> transaction, and the create_network_postcommit() method is called after
>> the transaction has been committed. Interactions with devices or
>> controllers are done in the postcommit methods. If the postcommit method
>> raises an exception, the plugin deletes that partially-created resource
>> and returns the exception to the client. You might consider a similar
>> approach in your plugin.
>>
>> -Bob
>>
>>> Basically, if concurrent API calls are sent to Neutron, all of them are
>>> sent to the plug-in level where two actions have to be made:
>>>
>>> 1. DB transaction – Not just for data persistence but also to collect the
>>> information needed for the next action
>>> 2. Plug-in back-end implementation – In our case this is a call to the python
>>> library that consequently calls PLUMgrid REST GW (soon SAL)
>>>
>>> For instance:
>>>
>>> def create_port(self, context, port):
>>>     with context.session.begin(subtransactions=True):
>>>         # Plugin DB - Port Create and Return port
>>>         port_db = super(NeutronPluginPLUMgridV2,
>>>                         self).create_port(context, port)
>>>         device_id = port_db["device_id"]
>>>         if port_db["device_owner"] == "network:router_gateway":
>>>             router_db = self._get_router(context, device_id)
>>>         else:
>>>             router_db = None
>>>         try:
>>>             LOG.debug(_("PLUMgrid Library: create_port() called"))
>>>             # Back-end implementation
>>>             self._plumlib.create_port(port_db, router_db)
>>>         except Exception:
>>>             …
>>>
>>> The way we have implemented this at the plugin level in Havana (even in
>>> Grizzly) is that both actions are wrapped in the same "transaction", which
>>> automatically rolls back any operation done to its original state,
>>> mostly protecting the DB from having any inconsistent state or leftover
>>> data if the back-end part fails.
>>> The problem that we are experiencing is that when concurrent calls to the
>>> same API are sent, the operations at the plug-in back-end take
>>> long enough to make the next concurrent API call get stuck at the DB
>>> transaction level, which creates a hung state for the Neutron Server to
>>> the point that all concurrent API calls will fail.
>>>
>>> This can be fixed if we include some "locking" system such as calling:
>>>
>>> from neutron.common import utils
>>> …
>>>
>>> @utils.synchronized('any-name', external=True)
>>> def create_port(self, context, port):
>>>     …
>>>
>>> Obviously, this will create a serialization of all concurrent calls,
>>> which will end up in really bad performance. Does anyone have a
>>> better solution?
>>>
>>> Thanks,
>>>
>>> Edgar
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

2013-07-12 Thread Robert Kukura
On 07/11/2013 04:30 PM, Aaron Rosen wrote:
> Hi, 
> 
> I think we should revert this patch that was added here
> (https://review.openstack.org/#/c/29767/). What this patch does is when
> nova-compute calls into quantum to create the port it passes in the
> hostname on which the instance was booted on. The idea of the patch was
> that providing this information would "allow hardware device vendors
> management stations to allow them to segment the network in a more
> precise manner (for example automatically trunk the vlan on the
> physical switch port connected to the compute node on which the vm
> instance was started)."
> 
> In my opinion I don't think this is the right approach. There are
> several other ways to get this information of where a specific port
> lives. For example, in the OVS plugin case the agent running on the
> nova-compute node can update the port in quantum to provide this
> information. Alternatively, quantum could query nova using the
> port.device_id to determine which server the instance is on. 
> 
> My motivation for removing this code is I now have the free cycles to
> work on
> https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
>  discussed here
> (http://lists.openstack.org/pipermail/openstack-dev/2013-May/009088.html)  .
> This was about moving the quantum port creation from the nova-compute
> host to nova-api if a network-uuid is passed in. This will allow us to
> remove all the quantum logic from the nova-compute nodes and
> simplify orchestration. 
> 
> Thoughts? 

Aaron,

The ml2-portbinding BP I am currently working on depends on nova setting
the binding:host_id attribute on a port before accessing
binding:vif_type. The ml2 plugin's MechanismDrivers will use the
binding:host_id with the agents_db info to see what (if any) L2 agent is
running on that host, or what other networking mechanisms might provide
connectivity for that host. Based on this, the port's binding:vif_type
will be set to the appropriate type for that agent/mechanism.

When an L2 agent is involved, the associated ml2 MechanismDriver will
use the agent's interface or bridge mapping info to determine whether
the agent on that host can connect to any of the port's network's
segments, and select the specific segment (network_type,
physical_network, segmentation_id) to be used. If there is no
connectivity possible on the host (due to either no L2 agent or other
applicable mechanism, or no mapping for any of the network's segment's
physical_networks), the ml2 plugin will set the binding:vif_type
attribute to BINDING_FAILED. Nova will then be able to gracefully put
the instance into an error state rather than have the instance boot
without the required connectivity.
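
As a rough sketch of that binding decision (plain helper code, not the 
exact ML2 driver interface, which is still evolving; the constants and 
agent-dict keys here are illustrative):

VIF_TYPE_BRIDGE = 'bridge'
VIF_TYPE_BINDING_FAILED = 'binding_failed'


def choose_binding(host_id, segments, agents_by_host):
    # Pick a segment reachable from host_id, or fail the binding.
    agent = agents_by_host.get(host_id)      # L2 agent reported for this host
    if not agent:
        return VIF_TYPE_BINDING_FAILED, None
    mappings = agent.get('bridge_mappings', {})
    for segment in segments:
        physnet = segment.get('physical_network')
        # Flat/VLAN segments need a mapping; tunnel segments (physnet None)
        # only need the agent to be present.
        if physnet is None or physnet in mappings:
            return VIF_TYPE_BRIDGE, segment
    return VIF_TYPE_BINDING_FAILED, None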

I don't see any problem with nova creating the port before scheduling it
to a specific host, but the binding:host_id needs to be set before the
binding:vif_type attribute is accessed. Note that the host needs to be
determined before the vif_type can be determined, so it is not possible
to rely on the agent discovering the VIF, which can't be created until
the vif_type is determined.

Back when the port binding extension was originally being hashed out, I
had suggested using an explicit bind() operation on port that took the
host_id as a parameter and returned the vif_type as a result. But the
current attribute-based approach was chosen instead. We could consider
adding a bind() operation for the next neutron API revision, but I don't
see any reason the current attribute-based binding approach cannot work
for now.

-Bob

> 
> Best, 
> 
> Aaron
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

2013-07-14 Thread Robert Kukura
On 07/12/2013 04:17 PM, Aaron Rosen wrote:
> Hi, 
> 
> 
> On Fri, Jul 12, 2013 at 6:47 AM, Robert Kukura <rkuk...@redhat.com> wrote:
> 
> On 07/11/2013 04:30 PM, Aaron Rosen wrote:
> > Hi,
> >
> > I think we should revert this patch that was added here
> > (https://review.openstack.org/#/c/29767/). What this patch does is
> when
> > nova-compute calls into quantum to create the port it passes in the
> > hostname on which the instance was booted on. The idea of the
> patch was
> > that providing this information would "allow hardware device vendors
> > management stations to allow them to segment the network in a more
> > precise manager (for example automatically trunk the vlan on the
> > physical switch port connected to the compute node on which the vm
> > instance was started)."
> >
> > In my opinion I don't think this is the right approach. There are
> > several other ways to get this information of where a specific port
> > lives. For example, in the OVS plugin case the agent running on the
> > nova-compute node can update the port in quantum to provide this
> > information. Alternatively, quantum could query nova using the
> > port.device_id to determine which server the instance is on.
> >
> > My motivation for removing this code is I now have the free cycles to
> > work on
> >
> https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
> >  discussed here
> >
> (http://lists.openstack.org/pipermail/openstack-dev/2013-May/009088.html)
>  .
> > This was about moving the quantum port creation from the nova-compute
> > host to nova-api if a network-uuid is passed in. This will allow us to
> > remove all the quantum logic from the nova-compute nodes and
> > simplify orchestration.
> >
> > Thoughts?
> 
> Aaron,
> 
> The ml2-portbinding BP I am currently working on depends on nova setting
> the binding:host_id attribute on a port before accessing
> binding:vif_type. The ml2 plugin's MechanismDrivers will use the
> binding:host_id with the agents_db info to see what (if any) L2 agent is
> running on that host, or what other networking mechanisms might provide
> connectivity for that host. Based on this, the port's binding:vif_type
> will be set to the appropriate type for that agent/mechanism.
> 
> When an L2 agent is involved, the associated ml2 MechanismDriver will
> use the agent's interface or bridge mapping info to determine whether
> the agent on that host can connect to any of the port's network's
> segments, and select the specific segment (network_type,
> physical_network, segmentation_id) to be used. If there is no
> connectivity possible on the host (due to either no L2 agent or other
> applicable mechanism, or no mapping for any of the network's segment's
> physical_networks), the ml2 plugin will set the binding:vif_type
> attribute to BINDING_FAILED. Nova will then be able to gracefully put
> the instance into an error state rather than have the instance boot
> without the required connectivity.
> 
> I don't see any problem with nova creating the port before scheduling it
> to a specific host, but the binding:host_id needs to be set before the
> binding:vif_type attribute is accessed. Note that the host needs to be
> determined before the vif_type can be determined, so it is not possible
> to rely on the agent discovering the VIF, which can't be created until
> the vif_type is determined.
> 
> 
> So what you're saying is the current workflow is this: nova-compute
> creates a port in quantum passing in the host-id (which is the hostname
> of the compute host). Now quantum looks in the agent table in its
> database to determine the VIF type that should be used based on the
> agent that is running on the nova-compute node? 

Most plugins just return a hard-wired value for binding:vif_type. The
ml2 plugin supports heterogeneous deployments, and therefore needs more
flexibility, so this is what's being implemented in the agent-based ml2
mechanism drivers. Other mechanism drivers (i.e. controller-based) would
work differently. In addition to VIF type selection, port binding in ml2
also involves determining if connectivity is possible, and selecting the
network segment to use, and these are also based on binding:host_id.

>  My question would be why
> the nova-compute node doesn't already 

Re: [openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

2013-07-15 Thread Robert Kukura
On 07/15/2013 03:54 PM, Aaron Rosen wrote:
> 
> 
> 
> On Sun, Jul 14, 2013 at 6:48 PM, Robert Kukura <rkuk...@redhat.com> wrote:
> 
> On 07/12/2013 04:17 PM, Aaron Rosen wrote:
> > Hi,
> >
> >
> > On Fri, Jul 12, 2013 at 6:47 AM, Robert Kukura <rkuk...@redhat.com> wrote:
> >
> > On 07/11/2013 04:30 PM, Aaron Rosen wrote:
> > > Hi,
> > >
> > > I think we should revert this patch that was added here
> > > (https://review.openstack.org/#/c/29767/). What this patch
> does is
> > when
> > > nova-compute calls into quantum to create the port it passes
> in the
> > > hostname on which the instance was booted on. The idea of the
> > patch was
> > > that providing this information would "allow hardware device
> vendors
> > > management stations to allow them to segment the network in
> a more
> > > precise manager (for example automatically trunk the vlan on the
> > > physical switch port connected to the compute node on which
> the vm
> > > instance was started)."
> > >
> > > In my opinion I don't think this is the right approach.
> There are
> > > several other ways to get this information of where a
> specific port
> > > lives. For example, in the OVS plugin case the agent running
> on the
> > > nova-compute node can update the port in quantum to provide this
> > > information. Alternatively, quantum could query nova using the
> > > port.device_id to determine which server the instance is on.
> > >
> > > My motivation for removing this code is I now have the free
> cycles to
> > > work on
> > >
> >
> https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
> > >  discussed here
> > >
> >
> (http://lists.openstack.org/pipermail/openstack-dev/2013-May/009088.html)
> >  .
> > > This was about moving the quantum port creation from the
> nova-compute
> > > host to nova-api if a network-uuid is passed in. This will
> allow us to
> > > remove all the quantum logic from the nova-compute nodes and
> > > simplify orchestration.
> > >
> > > Thoughts?
> >
> > Aaron,
> >
> > The ml2-portbinding BP I am currently working on depends on
> nova setting
> > the binding:host_id attribute on a port before accessing
> > binding:vif_type. The ml2 plugin's MechanismDrivers will use the
> > binding:host_id with the agents_db info to see what (if any)
> L2 agent is
> > running on that host, or what other networking mechanisms
> might provide
> > connectivity for that host. Based on this, the port's
> binding:vif_type
> > will be set to the appropriate type for that agent/mechanism.
> >
> > When an L2 agent is involved, the associated ml2
> MechanismDriver will
> > use the agent's interface or bridge mapping info to determine
> whether
> > the agent on that host can connect to any of the port's network's
> > segments, and select the specific segment (network_type,
> > physical_network, segmentation_id) to be used. If there is no
> > connectivity possible on the host (due to either no L2 agent
> or other
> > applicable mechanism, or no mapping for any of the network's
> segment's
> > physical_networks), the ml2 plugin will set the binding:vif_type
> > attribute to BINDING_FAILED. Nova will then be able to
> gracefully put
> > the instance into an error state rather than have the instance
> boot
> > without the required connectivity.
> >
> > I don't see any problem with nova creating the port before
> scheduling it
> > to a specific host, but the binding:host_id needs to be set
> before the
> > binding:vif_type attribute is accessed. Note that the host
> needs to be
> > determined before the vif_type can be determined, so it is not
> possible
> &

Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Robert Kukura
On 07/23/2013 03:15 PM, Mark McClain wrote:
> All-
> 
> I'd like to propose that Kyle Mestery and Armando Migliaccio be added to the 
> Neutron core team.  Both have been very active with valuable reviews and 
> contributions to the Neutron community.
> 
> Neutron core team members please respond with +1/0/-1.

+1 for each!

-Bob

> 
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] New Plug-in development for Neutron

2013-09-10 Thread Robert Kukura
On 09/10/2013 10:45 AM, Mark McClain wrote:
> I think Gary and Kyle have answered this very well; however I do have a few 
> things to add.  It is definitely too late for Havana, so Icehouse is next 
> available release for new plugins.  I can work with you offline to find you a 
> core sponsor.

One more thing: Rather than implementing and maintaining an entire new
plugin, consider whether it might make sense to integrate as an ml2
MechanismDriver instead. The ml2 plugin's MechanismDrivers can
interface with network devices or controllers, and can work in
conjunction with the existing L2 agents if needed. See
https://wiki.openstack.org/wiki/Neutron/ML2 for more info.

-Bob

> 
> mark
>  
> On Sep 10, 2013, at 9:37 AM, "Kyle Mestery (kmestery)"  
> wrote:
> 
>> On Sep 10, 2013, at 4:05 AM, P Balaji-B37839  wrote:
>>>
>>> Hi,
>>>
>>> We have gone through the below link for new plug-in development.
>>>
>>> https://wiki.openstack.org/wiki/NeutronDevelopment#Developing_a_Neutron_Plugin
>>>
>>> Just want to confirm: is it mandatory to be a core Neutron developer for 
>>> submitting a new plug-in?
>>>
>> It's not necessary for you to have a core developer from your company, but 
>> you will need an existing core developer to support your plugin upstream. 
>> When you file the blueprint, let us know and we'll work with you on this one 
>> Balaji.
>>
>> Thanks,
>> Kyle
>>
>>> How do we get a reviewer for this? Can we approach any core Neutron 
>>> developer to review our plugin?
>>>
>>> We are developing new plug-in for our product and want to make it upstream 
>>> to Neutron Core.
>>>
>>> Any information on this will be helpful!.
>>>
>>> Regards,
>>> Balaji.P
>>
>>
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Relationship between physical networks and segment

2016-03-29 Thread Robert Kukura
My answers below are from the perspective of normal (non-routed) 
networks implemented in ML2. The support for routed networks should 
build on this without breaking it.


On 3/29/16 3:38 PM, Miguel Lavalle wrote:

Hi,

I am writing a patchset to build a mapping between hosts and network 
segments. The goal of this mapping is to be able to say whether a host 
has access to a given network segment. I am building this mapping 
assuming that if a host A has a bridge mapping containing 'physnet 1' 
and a segment has 'physnet 1' in its 'physical_network' attribute, 
then the host has access to that segment.


1) Is this assumption correct? Looking at the method 
check_segment_for_agent in 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_agent.py#n180 
seems to suggest that my assumption is correct.
This is true for certain agent-based mechanism drivers, but cannot be 
assumed to be the case for all mechanism drivers (even all those that 
use agents). Any use of mapping info (i.e. from agents_db or elsewhere) 
is specific to an individual mechanism driver. I'd strongly recommend 
that any functionality trying to make decisions based on connectivity do 
so by calling into the registered mechanism drivers, so they can decide 
whether whatever they manage has connectivity.


Also note that connectivity may involve hierarchical port binding, in 
which case you really need to try to bind a port to determine if you 
have connectivity. I'm not suggesting that there is a requirement to mix 
HPB and routed networks, but please try not to build assumptions into 
ML2 plugin code that don't work with HPB or that are only valid for a 
subset of mechanism drivers.


2) Furthermore, when a segment is mapped to a physical network, is 
there a one to one relationship between segments and physical nets?
Certainly different virtual networks can map to different segments (i.e. 
VLANs) on the same physical network. It is even possible for the same 
virtual network to have multiple segments on the same physical network.
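
For example, a network create request along these lines (using the 
multi-provider extension; the attribute names here are from memory and 
worth double-checking against the API reference) describes one virtual 
network with two VLAN segments on the same physical network:

# Illustrative request body for POST /v2.0/networks with the
# multi-provider extension; values are made up.
multi_segment_network = {
    'network': {
        'name': 'net-with-two-segments',
        'segments': [
            {'provider:network_type': 'vlan',
             'provider:physical_network': 'physnet1',
             'provider:segmentation_id': 100},
            {'provider:network_type': 'vlan',
             'provider:physical_network': 'physnet1',
             'provider:segmentation_id': 101},
        ],
    }
}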


-Bob


Thanks


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Social at the summit

2016-04-25 Thread Robert Kukura

+1

On 4/25/16 1:07 PM, Kyle Mestery wrote:

OK, there is enough interest, I'll find a place on 6th Street for us
and get a reservation for Thursday around 7 or so.

Thanks folks!

On Mon, Apr 25, 2016 at 12:30 PM, Zhou, Han  wrote:

+1 :)

Han Zhou
Irc: zhouhan


On Monday, April 25, 2016, Korzeniewski, Artur
 wrote:

Sign me up :)

Artur
IRC: korzen

-Original Message-
From: Darek Smigiel [mailto:smigiel.dari...@gmail.com]
Sent: Monday, April 25, 2016 7:19 PM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [neutron] Social at the summit

Count me in!
Will be good to meet all you guys!

Darek (dasm) Smigiel


On Apr 25, 2016, at 12:13 PM, Doug Wiegley
 wrote:



On Apr 25, 2016, at 12:01 PM, Ihar Hrachyshka 
wrote:

WAT???

It was never supposed to be core only. Everyone is welcome!

+2

irony intended.

Socials are not controlled by gerrit ACLs.  :-)

doug


Sent from my iPhone


On 25 Apr 2016, at 11:56, Edgar Magana 
wrote:

Would you extend it to ex-cores?

Edgar





On 4/25/16, 10:55 AM, "Kyle Mestery"  wrote:

Ihar, Henry and I were talking and we thought Thursday night makes
sense for a Neutron social in Austin. If others agree, reply on this thread
and we'll find a place.

Thanks!
Kyle

___
___ OpenStack Development Mailing List (not for usage
questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__ OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

_
_ OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
 OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Adding results to extension callbacks (ml2 port_update ext handling).

2015-07-14 Thread Robert Kukura
I haven't had a chance to review this patch in detail yet, but am 
wondering if this is being integrated with ML2 as an extension driver? 
If so, that should clearly address how dictionaries are extended and 
input values are validated. If not, why?


-Bob

On 7/13/15 10:50 PM, Miguel Angel Ajo wrote:

Inline reply (I've added to CC the relevant people for ml2/plugin.py
port_update extension
handling - via git blame -) as they probably have an opinion here
(especially the last
two options).

Kevin Benton wrote:

This sounds like a lot of overlap with the dict extend functions. Why
weren't they adequate for the QoS use case?


Let me explain: I believe Mike exceeded the proposal with AFTER_READ;
that's not the plan,
even if we could do as he proposed.

AFTER_READ dict extension is just a temporary workaround until we have
a separate explicit
API @armax is working on. Making it explicit that your service is going
to extend resources,
and handling that in an ordered way, is a good thing.

Beyond that, the source of this review has come from the ML2 implementation of
AFTER_CREATE/AFTER_UPDATE notifications for ports/nets.

Let me explain:

 As a decoupled, "mixinless" service extending core resources, we
need to do two things:

1) Extending the core resources as other extensions would do, adding
stuff to the port/network
dicts, here's where it comes the current AFTER_READ dict extension,
and future API making
that more explicit and more controlled.

2) Verifying the extended values we receive on core resources, by
registering to BEFORE_*
callbacks. For example, if a tenant is trying to use a qos_profile_id
he doesn't have access to,
or that doesn't exist, we can cancel the operation by throwing an
exception (see the sketch after this list).

  We need to extend the notifications for ports and networks, as
those are not notified currently.
Mike will work on that too in a separate patch.


3) Taking the finally extended values and storing associations in the database
 (AFTER_UPDATE/AFTER_CREATE) so any later reads of the
port/network will get the right
 qos_profile later in "after read".
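
A minimal sketch of the kind of BEFORE_* validation mentioned in point 2,
using the neutron callbacks registry; the qos_profile_id attribute, the
lookup helper, and the exact resource/event wiring are illustrative and
would only apply once the port/network notifications mentioned above exist:

# Illustrative only: cancel a port update when the requested
# qos_profile_id is missing or not visible to the tenant.
from neutron.callbacks import events, registry
from neutron.common import exceptions


class QosProfileNotFound(exceptions.NotFound):
    message = "QoS profile %(profile_id)s could not be found"


def validate_qos_profile(resource, event, trigger, **kwargs):
    port = kwargs.get('port') or {}
    profile_id = port.get('qos_profile_id')
    if profile_id and not _profile_exists_for_tenant(
            profile_id, port.get('tenant_id')):
        # Raising here cancels the operation before it is committed.
        raise QosProfileNotFound(profile_id=profile_id)


def _profile_exists_for_tenant(profile_id, tenant_id):
    return False  # placeholder for the real DB/RBAC check


registry.subscribe(validate_qos_profile, 'port', events.BEFORE_UPDATE)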


We have found that AFTER_CREATE/AFTER_UPDATE happens at the plugin level
(neutron/plugins/ml2/plugin.py / update_port) and that the information
passed down is
very brief for our case (basically a "None" port if no ml2-known
attribute is changed), and
ml2 has to be aware of every single extension.

Here there are two options:
   a) we make ml2 also aware of qos_profile_id changes, complicating
the business logic down
there, or...

   b) we send the AFTER_CREATE/UPDATE events, and the callback
listeners for such notifications will tell us if there's any
relevant field which must
be propagated down to agents. Then it's up to the agents to use such a
field or not.


   Mike's patch proposal is in the direction of (b); he's a long-term
thinker. I was proposing that he
just go with (a) for now, but let's discuss and find what's more right.



On Mon, Jul 13, 2015 at 7:58 AM, Mike Kolesnik
wrote:


Hi,

I sent a simple patch to check the possibility to add results to
callbacks:
https://review.openstack.org/#/c/201127/

This will allow us to decouple the callback logic from the ML2
plugin in
the QoS scenario where we need to update the agents in case the
profile_id
on a port/network changes.
It will also allow for a cleaner way to extend resource attributes as
AFTER_READ callbacks can return a dict of fields to add to the original
resource instead of mutating it directly.

Please let me know what you think of this idea.

Regards,
Mike


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos][ml2] extensions swallow exceptions

2015-08-04 Thread Robert Kukura
The process_[create|update]_<resource>() extension driver methods are 
intended to validate user input. Exceptions raised by these need to be 
returned to users so they know what they did wrong. These exceptions 
should not be logged as anything higher than info level, since user 
errors generally are not of concern to admins. Also, given that these 
methods are called within the create or update transaction, raising an 
exception will cause the entire transaction to be rolled back. Once an 
exception occurs in an extension driver, there is no point in 
continuing to call the other extension drivers since only one exception 
will be returned to the user at a time.
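
As a minimal sketch of that pattern (the extension alias, the widget_id 
attribute, and the lookup helper are all made up for illustration):

# Illustrative ML2 extension driver validating user input inside
# process_create_port(); raising rolls back the whole create transaction.
from neutron.plugins.ml2 import driver_api as api


class WidgetExtensionDriver(api.ExtensionDriver):
    _alias = 'widget'

    def initialize(self):
        pass

    @property
    def extension_alias(self):
        return self._alias

    def process_create_port(self, plugin_context, data, result):
        widget_id = data.get('widget_id')
        if widget_id and not self._widget_exists(plugin_context, widget_id):
            # A real driver would raise a proper Neutron API exception here
            # so the user sees a meaningful error code.
            raise ValueError('widget %s does not exist' % widget_id)
        result['widget_id'] = widget_id

    def _widget_exists(self, plugin_context, widget_id):
        return False  # placeholder for the real DB lookup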


-Bob

On 8/4/15 2:26 PM, Abhishek Raut wrote:

There is this review [1] trying to solve exactly what you're asking for.

I think it makes sense for the exceptions to be propagated all the way
back to the user and the transaction rolled back, instead of swallowing
them. Does it even make sense to continue after a failure?

[1] https://review.openstack.org/#/c/202061/

-Abhishek Raut

On 8/4/15, 3:02 AM, "Ihar Hrachyshka"  wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi all,

in feature/qos, we use ml2 extension drivers to handle additional
qos_policy_id field that can be provided thru API:

http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2
/extensions/qos.py?h=feature/qos

What we do in qos extension is we create a database 'binding' object
between the updated port and the QoS policy that corresponds to
qos_policy_id. So we access the database. It means there may be some
complications there, f.e. the policy object is not available for the
tenant, or just does not exist. In that case, we raise an exception

from the extension, assuming that ml2 will propagate it to the user in

some form.

But it does not work. This is because _call_on_ext_drivers swallows
exceptions:

http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2
/managers.py#n766

It makes me ask some questions:

- first, do we use extensions as was expected? Can we extend
extensions to cover our use case?

- second, what would be the right way to go assuming we want to
support the case? Should we just reraise? Or maybe postpone till all
extension drivers are called, and then propagate an exception top into
the stack? (Probably some extension manager specific exception?) Or
maybe we want extensions to claim whether they may raise, and handle
them accordingly?

- alternatively, if we abuse the API and should stop doing it, which
other options do we have to achieve similar behaviour without relying
on ml2 extensions AND without polluting the ml2 driver with qos specific
code?

Thanks for your answers,
Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVwI29AAoJEC5aWaUY1u57yLYH/jhYmu4aR+ewZwSzDYXMcfdz
tD5BSYKD/YmDMIAYprmVCqOlk1jaioesFPMUOrsycpacZZWjg5tDSrpJ2Iz5/ZPw
BYLIPGaYF3Pu87LHrUKhIz4f2TfSWve/7GBCZ6AK6zVqCXky8A9MRfWrf774a8oF
kexP7qQVbyrOcXxZANDa1bJuLDsb4TiTcuuDizPtuUWlMfzmtZeauyieji/g1smq
HBO5h7zUFQ87YvBqq7ed2KhlRENxo26aSrpxTFkyyxJU9xH1J8q9W1gWO7Tw1uCV
psaijDmlxU/KySR97Ro8m5teu+7Pcb2cg/s57WaHWuAvPNW1CmfYc/XDn2I9KlI=
=Fo++
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-08 Thread Robert Kukura
From a driver's perspective, it would be simpler, and I think 
sufficient, to change ML2 to call initialize() on drivers after the 
forking, rather than requiring drivers to know about forking.


-Bob

On 6/8/15 2:59 PM, Armando M. wrote:

Interestingly, [1] was filed a few moments ago:

[1] https://bugs.launchpad.net/neutron/+bug/1463129

On 2 June 2015 at 22:48, Salvatore Orlando wrote:


I'm not sure if you can test this behaviour on your own because it
requires the VMware plugin and the eventlet handling of backend
response.

But the issue was manifesting and had to be fixed with this
mega-hack [1]. The issue was not about several workers executing
the same code - the loopingcall was always started on a single
thread. The issue I witnessed was that the other API workers just
hang.

There's probably something we need to understand about how
eventlet can work safely with an os.fork (I just think they're not
really made to work together!).
Regardless, I did not spend too much time on it, because I thought
that the multiple workers code might have been rewritten anyway by
the pecan switch activities you're doing.

Salvatore


[1] https://review.openstack.org/#/c/180145/

On 3 June 2015 at 02:20, Kevin Benton <blak...@gmail.com> wrote:

Sorry about the long delay.

>Even the LOG.error("KEVIN PID=%s network response: %s" %
(os.getpid(), r.text)) line?  Surely the server would have
forked before that line was executed - so what could prevent
it from executing once in each forked process, and hence
generating multiple logs?

Yes, just once. I wasn't able to reproduce the behavior you
ran into. Maybe eventlet has some protection for this? Can you
provide small sample code for the logging driver that does
reproduce the issue?

On Wed, May 13, 2015 at 5:19 AM, Neil Jerram
<neil.jer...@metaswitch.com> wrote:

Hi Kevin,

Thanks for your response...

On 08/05/15 08:43, Kevin Benton wrote:

I'm not sure I understand the behavior you are seeing.
When your
mechanism driver gets initialized and kicks off
processing, all of that
should be happening in the parent PID. I don't know
why your child
processes start executing code that wasn't invoked.
Can you provide a
pointer to the code or give a sample that reproduces
the issue?


https://github.com/Metaswitch/calico/tree/master/calico/openstack

Basically, our driver's initialize method immediately
kicks off a green thread to audit what is now in the
Neutron DB, and to ensure that the other Calico components
are consistent with that.

I modified the linuxbridge mech driver to try to
reproduce it:
http://paste.openstack.org/show/216859/

In the output, I never received any of the init code
output I added more
than once, including the function spawned using eventlet.


Interesting.  Even the LOG.error("KEVIN PID=%s network
response: %s" % (os.getpid(), r.text)) line?  Surely the
server would have forked before that line was executed -
so what could prevent it from executing once in each
forked process, and hence generating multiple logs?

Thanks,
Neil

The only time I ever saw anything executed by a child
process was actual
API requests (e.g. the create_port method).




On Thu, May 7, 2015 at 6:08 AM, Neil Jerram
<neil.jer...@metaswitch.com> wrote:

Is there a design for how ML2 mechanism drivers
are supposed to cope
with the Neutron server forking?

What I'm currently seeing, with api_workers = 2, is:

- my mechanism driver gets instantiated and
initialized, and
immediately kicks off some processing that
involves communicating
over the network

- the Neutron server process then forks into
multiple copies

- multiple copies of my driver's network
processing then continue,
and interfere badly with each other :-)

I think what I should do is:

- wait until any forking has happened

- then decide (somehow) which mechanism driver is
 

Re: [openstack-dev] [GBP][Heat] Group-based Policy plug-in for Heat

2015-06-18 Thread Robert Kukura
Zane, I will fix this (and the other GBP packages) as soon as I possibly 
can. I agree it would be better to move the GBP heat support into the 
heat repo, either main tree or /contrib, but others on the GBP team 
would most likely handle that. Seems reasonable for Liberty.


Thanks,

-Bob

On 6/17/15 8:56 AM, Zane Bitter wrote:
Every day I get an email about broken dependencies for the group-based 
policy Heat plugin in Fedora (openstack-heat-gbp), because the version 
of Heat it depends on is capped. Two points:


1) Would whoever maintains that package (Bob?) please fix it :)

2) Why was this plugin not submitted to /contrib in the Heat repo so 
that it could be continuously tested & maintained against the Heat 
code base, thus avoiding any compatibility problems with future 
versions? This is the entire reason we have /contrib. In fact, if you 
think the resources are close to their final form (i.e. the API is 
not super experimental still), I think we may even be willing to 
accept them into the main tree at this point.


cheers,
Zane.

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - generic port binding for ml2 and dvr

2015-06-19 Thread Robert Kukura

Hi Swami,

My fix for 1367391 will eventually eliminate the special DVRPortBinding 
table, so would presumably eliminate this bug, but you are correct that 
the current WIP patch (https://review.openstack.org/#/c/189410/)  isn't 
quite that far along yet. I'll look for an interim fix.


-Bob

On 6/19/15 2:53 PM, Vasudevan, Swaminathan (PNB Roseville) wrote:


Hi Rob Kukura,

We are seeing an issue in the ML2 device_context that was modified by 
you in a recent patch to get everything from the PortContext instead 
of the DVR context.


Right now the PortContext does not seem to return the “DVRbinding” 
status, but just returns the PortContext and it errors out, since the 
PortContext does not have the attribute “status”.


https://bugs.launchpad.net/neutron/+bug/1465434

The error is seen when you try to “update” a distributed virtual 
router interface.


It errors out in “port_update_postcommit”.

You have mentioned in your “TODO” comments that it would be addressed 
by the bug 1367391, but I still see a patch in WIP waiting for review.


Can you take a look at this bug please.

Thanks.

Swaminathan Vasudevan

Systems Software Engineer (TC)

HP Networking

Hewlett-Packard

8000 Foothills Blvd

M/S 5541

Roseville, CA - 95747

tel: 916.785.0937

fax: 916.785.1815

email: swaminathan.vasude...@hp.com 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Robert Kukura
Is Manila actually connecting (i.e. binding) something it controls to a 
Neutron port, similar to how a Neutron L3 or DHCP agent connects a 
network namespace to a port? Or does it just need to know the details 
about a port bound for a VM (or a service)?


If the former, it should probably be using something similar to 
Neutron's interface drivers (or maybe Nova's new VIF library) so it can 
work with arbitrary core plugins or ML2 mechanism drivers, and any 
corresponding L2 agent. If it absolutely requires a VLAN on a node's 
physical network, then Kevin's idea of a Manila-specific mechanism 
driver that does the binding (without involving an L2 agent) may be the 
way to go.


-Bob

On 2/29/16 4:38 PM, Kevin Benton wrote:
You're correct. Right now there is no way via the HTTP API to find 
which segments a port is bound to.
This is something we can certainly consider adding, but it will need 
an RFE so it wouldn't land until Newton at the earliest.


Have you considered writing an ML2 driver that just notifies Manila 
of the port's segment info? All of this information is available to 
ML2 drivers in the PortContext object that is passed to them.
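
A rough, untested sketch of what such a notification-only mechanism 
driver could look like (the import path assumes the neutron_lib ML2 
driver API; the driver name and the _notify_manila hook are purely 
hypothetical, and a real implementation would need an actual transport 
into Manila):

    from neutron_lib.plugins.ml2 import api


    class ManilaSegmentNotifierDriver(api.MechanismDriver):
        """Notification-only driver: never binds, only reports segments."""

        def initialize(self):
            pass

        def update_port_postcommit(self, context):
            # The bottom (last) binding level is what matters when the
            # storage sits behind a ToR switch.
            segment = context.bottom_bound_segment
            if not segment:
                return
            self._notify_manila(
                port_id=context.current['id'],
                network_type=segment['network_type'],
                physical_network=segment['physical_network'],
                segmentation_id=segment['segmentation_id'])

        def _notify_manila(self, **segment_info):
            # Hypothetical hook: replace with an RPC or HTTP call into
            # Manila.
            pass

Such a driver would be registered like any other mechanism driver 
(entry point plus the mechanism_drivers option in ml2_conf.ini) and 
would not interfere with the drivers that actually do the binding.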


On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:


Fixed neutron tag in the subject.

Marc <m...@koderer.com> wrote:

Hi Neutron team,

I am currently working on a feature for hierarchical port binding
support in Manila [1] [2]. Just to give some context: in the current
implementation Manila creates a Neutron port but leaves it unbound
(state DOWN). Manila therefore uses the port create only to retrieve an
IP address and a segmentation ID (some drivers only support VLAN here).

My idea is to change this behavior and do an actual port binding so
that configuring the VLAN is no longer a manual job, and so that
multi-segment networks and HPB are supported in the long run.

My current issue is: how can Manila retrieve the segment information
for a bound port? Manila is only interested in the last (bottom)
segmentation ID, since I assume the storage is connected to a ToR
switch.

Database-wise it's possible to query this using the
ml2_port_binding_levels table, but AFAIK there is no API to query it.
The only information that is exposed is the full set of segments of a
network, but this is not sufficient to identify which segments are
actually used for a port binding.

Regards
Marc
SAP SE

[1]:
https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
[2]: https://review.openstack.org/#/c/277731/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Robert Kukura
Kyle, what happened to the long-term potential goal of ML2 driver APIs 
becoming Neutron's core APIs? Do we really want to encourage new 
monolithic plugins?


ML2 is not a control plane - it's really just an integration point for 
control planes. Although co-existence of multiple mechanism drivers is 
possible, and sometimes very useful, the single-driver case is fully 
supported. Even with hierarchical bindings, it's not really ML2 that 
controls what happens - it's the drivers within the framework. I don't 
think ML2 really limits what drivers can do, as long as a virtual 
network can be described as a set of static and possibly dynamic network 
segments. ML2 is intended to impose as few constraints on drivers as 
possible.


My recommendation would be to implement an ML2 mechanism driver for OVN, 
along with any needed new type drivers or extension drivers. I believe 
this will result in a lot less new code to write and maintain.
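
For reference, the overall shape of such a driver is small - something 
along the lines of the illustrative, untested skeleton below (written 
against the neutron_lib ML2 driver API, not the actual networking-ovn 
code; the self.ovn attribute is a hypothetical stand-in for whatever 
OVN northbound interface ends up being used):

    from neutron_lib.plugins.ml2 import api


    class OVNMechanismDriverSketch(api.MechanismDriver):
        """Skeleton only: ML2 keeps the API/DB layer, and the driver
        relays postcommit events to the external OVN control plane."""

        def initialize(self):
            # Hypothetical: open a connection to the OVN northbound DB.
            self.ovn = None

        def create_network_postcommit(self, context):
            network = context.current
            # e.g. self.ovn.create_logical_switch(network['id'], ...)

        def create_port_postcommit(self, context):
            port = context.current
            # e.g. self.ovn.create_logical_port(port['network_id'],
            #                                   port['id'], ...)

        def bind_port(self, context):
            # No L2 agent is involved; bind directly to the first
            # segment this driver can handle.
            for segment in context.segments_to_bind:
                context.set_binding(segment['id'], 'ovs',
                                    {'port_filter': True})
                return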


Also, keep in mind that even if multiple-driver co-existence doesn't 
sound immediately useful, there are several potential use cases to 
consider. One is that it allows new technology to be introduced into an 
existing cloud alongside what previously existed. Migration from one ML2 
driver to another may be a lot simpler (and/or more flexible) than 
migration from one plugin to another. Another is that additional drivers 
can support special cases, such as bare metal, appliances, etc.


-Bob

On 2/24/15 11:11 AM, Kyle Mestery wrote:
On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando <sorla...@nicira.com> wrote:


On 24 February 2015 at 01:34, Kyle Mestery <mest...@mestery.com> wrote:

Russel and I have already merged the initial ML2 skeleton
driver [1].

The thinking is that we can always revert to a non-ML2 driver
if needed.


If nothing else an authoritative decision on a design direction
saves us the hassle of going through iterations and discussions.
The integration through ML2 is definitely viable. My opinion
however is that since OVN implements a full control plane, the
control plane bits provided by ML2 are not necessary, and a plugin
which provides only management layer capabilities might be the
best solution. Note: this does not mean it has to be monolithic.
We can still do L3 with a service plugin.
However, since the same kind of approach has been adopted for ODL
I guess this provides some sort of validation.

To be honest, after thinking about this last night, I'm now leaning 
towards doing this as a full plugin. I don't really envision OVN 
running with other plugins, as OVN is implementing its own control 
plane, as you say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers
will be, and that was my initial concern with doing ML2 vs.
a full plugin. With the HW VTEP support in OVN+OVS, you can tie
in physical devices this way. Anyways, this is where we're at
for now. Comments welcome, of course.


That was also kind of my point regarding the control plane bits
provided by ML2 which OVN does not need. Still, the fact that we
do not use a function does no harm.
Also I'm not sure whether OVN needs a type manager at all. If not, we
can always implement a no-op type manager, I guess.

See above. I'd like to propose we move OVN to a full plugin instead of 
an ML2 MechanismDriver.


Kyle

Salvatore


Thanks,
Kyle

[1] https://github.com/stackforge/networking-ovn

On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton <blak...@gmail.com> wrote:

I want to emphasize Salvatore's last two points a bit
more. If you go with a monolithic plugin, you eliminate
the possibility of heterogeneous deployments.

One example of this that is common now is having the
current OVS driver responsible for setting up the vswitch
and then having a ToR driver (e.g. Big Switch, Arista,
etc) responsible for setting up the fabric. Additionally,
there is a separate L3 plugin (e.g. the reference one,
Vyatta, etc) for providing routing.

I suppose with an overlay it's easier to take the route
that you don't want to be compatible with other networking
stuff at the Neutron layer (e.g. integration with the 3rd
parties is orchestrated somewhere else). In that case, the
above scenario wouldn't make much sense to support, but
it's worth keeping in mind.

On Mon, Feb 23, 2015 at 10:28 AM, Salvatore Orlando <sorla...@nicira.com> wrote:

I think there are a few factors which influence the
ML2 driver vs "monolithic" plugin debate, and they
mostly depend on OVN rather than Neutron.
From a Neutron pers

Re: [openstack-dev] [Group-Based-Policy] Fixing backward incompatible unnamed constraints removal

2015-04-15 Thread Robert Kukura
I believe that, on the stable branch at least, we need to fix the 
migrations so that upgrades are possible. This probably means fixing 
them the same way on the master branch first and backporting the fixes 
to stable/juno. All migrations that were present in the initial juno 
release need to be restored to the exact state they were in that 
release, and new migrations need to be added that make the needed schema 
changes, preserving state of existing deployments. I'm assuming there is 
more involved than just the constraint removal in Ivar's [2], but 
haven't checked yet. I think it would be OK to splice these new 
migrations into the chain on master just after the final migration that 
was present in the juno release, since we are not trying to support 
trunk chasers on master. Does this make sense? I do not think it should 
be difficult, unless schema changes were introduced for which deployment 
state cannot be preserved/defaulted.
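
As an illustration of the splicing idea, the new migration would look 
roughly like the sketch below (all revision ids, table and constraint 
names are hypothetical; this is not an actual GBP migration, just the 
shape of one inserted immediately after the last Juno revision):

    """Splice a schema fix in after the final Juno migration.

    Revision ID: fix_unnamed_constraint (hypothetical)
    Revises: final_juno_revision (hypothetical id of the last Juno
    migration)
    """

    from alembic import op

    revision = 'fix_unnamed_constraint'
    down_revision = 'final_juno_revision'


    def upgrade():
        # The original Juno migrations are left exactly as released;
        # this new migration makes the needed change for deployments
        # that already ran them, preserving their existing state.
        op.drop_constraint('gp_example_fk', 'gp_example_table',
                           type_='foreignkey')


    def downgrade():
        pass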


-Bob

On 4/15/15 3:30 AM, Sumit Naiksatam wrote:

Thanks Ivar for tracking this and bringing it up for discussion. I am
good with taking approach (1).



On Mon, Apr 13, 2015 at 1:10 PM, Ivar Lazzaro  wrote:

Hello Team,

As per the discussion in the latest GBP meeting [0], I'm hunting down all the
backward-incompatible changes made to DB migrations regarding the removal of
unnamed constraints.
In this report [1] you can find the list of affected commits.

The problem is that some of the affected commits are already backported to
Juno, and others will be [2], so I was wondering what the plan is for
backporting the compatibility fix to stable/juno.
I see two possibilities:

1) We backport [2] as-is (with the broken migration), but we cut the new
stable release only once [3] is merged and backported. This has the
advantage of a cleaner backport tree in which all the changes in
master are cherry-picked without major changes.

2) We split [3] into multiple patches, and we only backport those that fix
commits that are already in Juno. Patches like [2] will be changed to
accommodate the fixed migration *before* being merged into the stable branch.
This will avoid intra-release code breakage (which is an issue for people
installing GBP directly from source).

Please share your thoughts, Thanks,
Ivar.

[0]
http://eavesdrop.openstack.org/meetings/networking_policy/2015/networking_policy.2015-04-09-18.00.log.txt
[1] https://bugs.launchpad.net/group-based-policy/+bug/1443606
[2] https://review.openstack.org/#/c/170972/
[3] https://review.openstack.org/#/c/173051/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-13 Thread Robert Kukura

Hi Kevin,

I will file the RFE this week.

-Bob


On 3/13/17 2:05 PM, Kevin Benton wrote:

Hi,

At the PTG we briefly discussed a generic system for verifying that 
the appropriate drivers are enforcing a particular user-requested 
feature in ML2 (e.g. security groups, qos, etc).


Is someone planning on working on this for Pike? If so, can you please 
file an RFE so we can prioritize it appropriately? We have to decide 
if we are going to block features based on the enforcement by this 
framework.
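
Purely to illustrate the idea (no such framework exists in ML2 today, 
and every name below is made up): drivers could advertise the features 
they are able to enforce, and ML2 could then check a user-requested 
feature against the drivers that actually participated in the binding, 
along these lines:

    class FeatureAwareDriver(object):
        # Hypothetical attribute a mechanism driver could expose.
        supported_features = frozenset()


    class ExampleQosDriver(FeatureAwareDriver):
        supported_features = frozenset({'qos', 'security-group'})


    def feature_is_enforced(bound_drivers, feature):
        """True only if every driver in the binding enforces 'feature'."""
        return all(feature in d.supported_features
                   for d in bound_drivers)


    # ML2 could then block (or warn about) a QoS request on a port whose
    # bound drivers cannot enforce it:
    assert feature_is_enforced([ExampleQosDriver()], 'qos')
    assert not feature_is_enforced([FeatureAwareDriver()], 'qos')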


Cheers,
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-15 Thread Robert Kukura

RFE is at https://bugs.launchpad.net/neutron/+bug/1673142.

-Bob


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Robert Kukura

+1


On 2/17/17 2:18 PM, Kevin Benton wrote:

Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta 
somewhere near the venue for dinner/drinks. If you're interested, 
please reply to this email with a "+1" so I can get a general count 
for a reservation.


Cheers,
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why ML2 does not support updating network provider attributes

2016-11-07 Thread Robert Kukura
I'm not sure unbinding ports is necessary unless a segment that a port 
is bound to is removed or modified. A reasonable compromise might be for 
ML2 to allow adding new segments, which should not require 
unbinding/rebinding ports, but not to allow removing or modifying 
existing segments.


-Bob


On 11/5/16 7:59 PM, Kevin Benton wrote:
To allow that we would have to unbind every port in the network and 
then rebind it with the new segment info generated from changing the 
provider attributes. This requires quite a bit of orchestration code 
in ML2 (to handle failures, etc) that doesn't exist right now.


On Sat, Nov 5, 2016 at 7:50 AM, zhuna wrote:


Dear,

The Neutron RESTful API supports updating network provider attributes,
but ML2 raises an error when provider attributes are updated (in the
function _raise_if_updates_provider_attributes). Can anyone tell
me why ML2 does not support this?

BR

Juno


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev