Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
On 8 August 2014 10:56, Kevin Benton blak...@gmail.com wrote:

 There is an enforcement component to the group policy that allows you to
 use the current APIs and it's the reason that group policy is integrated
 into the neutron project. If someone uses the current APIs, the group
 policy plugin will make sure they don't violate any policy constraints
 before passing the request into the regular core/service plugins.


This is the statement that makes me trip over, and I don't understand why
GBP and Neutron Core need to be 'integrated' together as they have been.
Policy decision points can be decentralized from the system under scrutiny;
we don't need one giant monolithic system that does everything. It's
an architectural decision that would make it difficult to achieve
composability and all the other good -ilities of software systems.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.

 Adding the GBP extension to Neutron does not change the nature of the
 software architecture of Neutron making it more or less monolithic.


I agree with this statement...partially: the way GBP was developed is in
accordance with the same principles and architectural choices made for the
service plugins and frameworks we have right now, and yes, it does not make
Neutron more monolithic, but certainly not less. These very same principles
have revealed limitations that we have realized need to be addressed,
subject to Neutron's busy agenda. That said, if I were given the opportunity
to revise some architectural decisions during new groundbreaking work
(regardless of its nature), I would.

For instance, I hate that the service plugins live in the same address
space as the Neutron server; I hate that I have one Neutron server that does
L2, L3, IPAM, ...; we could break it apart and make sure every entity has
its own lifecycle: we could compose and integrate more easily if we did.
Isn't that what years of middleware and distributed systems taught us?

I suggested in the past that GBP would best integrate with Neutron via a
stable and RESTful interface, just like any other OpenStack project does. So
far I have not been convinced otherwise, though I would love to be able to
change my opinion.


 It fills a gap that is currently present in the Neutron API, namely
 to complement the current imperative abstractions with an
 app-developer/deployer friendly declarative abstraction [1]. To
 reiterate, it has been proposed as an "extension", and not a
 replacement of the core abstractions or the way those are consumed.
 If this is understood and interpreted correctly, I doubt that there
 should be reason for concern.


I never said that GBP meant to replace the core abstractions: I am
talking purely about architecture and system integration. I am not sure
this statement is directed at my comment.


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
On 8 August 2014 14:55, Kevin Benton blak...@gmail.com wrote:

 This is the statement that makes me trip over,

 I don't know what that means. Does it mean that you are so incredibly
 shocked by the stupidity of that statement that you fall down? Or does it
 mean something else?


Why would you think that? I trip over the obstacle that prevents me from
understanding! If anything, I would blame my own stupidity, not that of the
statement :)



 Policy decision points can be decentralized from the system under
 scrutiny,

 Unfortunately they can't in this case where some policy needs to be
 enforced between plugins. If we could refactor the communication between
 service and core plugins to use the API as well, then we probably could
 build this as a middleware.


Assuming I agreed that they couldn't, which I find hard to believe, why
would we stick with the less optimal approach instead of going after the
better one?







Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
  One advantage of the service plugin is that one can leverage the neutron
 common framework such as Keystone authentication where common scoping is
 done. It would be important in the policy type of framework to have such
 scoping


The framework you're referring to is common and already reusable; it's not
exclusive to Neutron.



 While the service plugin has scalability issues as pointed above that it
 resides in neutron server, it is however stable and user configurable and a
 lot of common code is executed for networking services.


This is what static and dynamic libraries are for: I can take a building
block and reuse it as many times as I see fit, while keeping my components'
lifecycles separate.


 So while we make the next generation services framework more distributed
 and scalable, it is ok to do it under the current framework especially
 since it has provision for the user to opt in when needed.


A next-generation services framework is not a prerequisite for integrating
two OpenStack projects via REST APIs. I don't see why we would associate
the two concepts.








Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.


 On Fri, Aug 8, 2014 at 5:38 PM, Armando M. arma...@gmail.com wrote:



   One advantage of the service plugin is that one can leverage the
 neutron common framework such as Keystone authentication where common
 scoping is done. It would be important in the policy type of framework to
 have such scoping


 The framework you're referring to is common and already reusable, it's
 not a prerogative of Neutron.


  Are you suggesting that Service Plugins, L3, IPAM etc. become individual
 endpoints, resulting in redundant authentication round-trips for each of
 the components?

 Wouldn't this result in degraded performance and potential consistency
 issues?


The endpoint - in the OpenStack lingo - that exposes the API abstractions
(concepts and operations) can be, logically and physically, different from
the worker that implements these abstractions; authentication is orthogonal
to this and I am not suggesting what you mention.


Re: [openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?

2014-08-09 Thread Armando M.
On 9 August 2014 10:16, Jay Pipes jaypi...@gmail.com wrote:

 Paul, does this friend of a friend have a reproduceable test script for
 this?

 Thanks!
 -jay


We would also need to know the OpenStack release where this issue manifests
itself. A number of bugs have been raised in the past around this type of
issue, and the last fix I recall is this one:

https://bugs.launchpad.net/nova/+bug/1300325

It's possible that this might have regressed, though.

Cheers,
Armando


Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Armando M.
I am gonna add more color to this story by posting my replies on review [1]:

Hi Angus,

You touched on a number of points. Let me try to give you an answer to all
of them.

 (I'll create a bug report too. I still haven't worked out which class of
changes need an accompanying bug report and which don't.)

The long story can be read below:

https://wiki.openstack.org/wiki/BugFilingRecommendations

https://wiki.openstack.org/wiki/GitCommitMessages

IMO, there's a grey area for some of the issues you found, but when I am
faced with a bug, I tend to ask myself: would a bug report be useful to
someone else? The author of the code? A consumer of the code? Not everyone
follows the code review system all the time, whereas Launchpad is pretty
much the tool everyone uses to stay abreast of the OpenStack release
cycle. Obviously, if you're fixing a grammar nit, or filing a cosmetic
change that has no functional impact, then the lack of a bug report is
warranted; but in this case you're fixing a genuine error: let's say we
want to backport this to Icehouse; how else would we make the release
manager aware of that? He/she is looking at Launchpad.

 I can add a unittest for this particular code path, but it would only
check this particular short segment of code, would need to be maintained as
the code changes, and wouldn't catch another occurrence somewhere else.
This seems an unsatisfying return on the additional work :(

You're looking at this from the wrong perspective. This is not about
ensuring that other code paths are valid, but that this code path stays
valid over time, by ensuring that the code path is exercised and that no
other regression of any kind creeps in. The reason this error was introduced
in the first place is that the code wasn't tested when it should have been.
Without that, this mechanical effort of fixing errors found by static
analysis is rather ineffective, which leads me to the last point.

 I actually found this via static analysis using pylint - and my question
is: should I create some sort of pylint unittest that tries to catch this
class of problem across the entire codebase? [...]

I value what you're doing; however, I would see even more value if we
prevented these types of errors from occurring in the first place via
automation. You run pylint today, but what about tomorrow, or a week from
now? Are you gonna be filing pylint fixes forever? We might be better off
automating the check and catching these types of errors before they land in
the tree. This means that the work you are doing is two-pronged: a)
automate the detection of some failures by hooking this into tox.ini via
HACKING/pep8 or an equivalent mechanism, and b) file all the fixes required
for these validation tests to pass; then c) everyone is happy, or at least
they should be.
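As an illustration of the kind of automated check that could be hooked into tox.ini, here is a minimal, stdlib-only sketch; the helper name and the flagged pattern are just examples, not an existing hacking check:

```python
import ast

def find_calls(source, qualified_name):
    """Return the line numbers at which `qualified_name`
    (e.g. 'contextlib.nested') is called in `source`."""
    module, attr = qualified_name.split(".")
    hits = []
    for node in ast.walk(ast.parse(source)):
        # Match calls of the form `module.attr(...)`.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == attr
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == module):
            hits.append(node.lineno)
    return hits

sample = (
    "import contextlib\n"
    "with contextlib.nested(open('a'), open('b')):\n"
    "    pass\n"
)
print(find_calls(sample, "contextlib.nested"))  # -> [2]
```

A check like this, wired into the pep8/HACKING tox environment, would fail the gate whenever the banned pattern reappears, instead of relying on someone re-running pylint by hand.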

I'd welcome exploring a better strategy to ensure the quality of the
code base; without some degree of automation, nothing will stop these
conversations from happening again.

Cheers,

Armando

[1] https://review.openstack.org/#/c/113777/


On 13 August 2014 03:02, Ihar Hrachyshka ihrac...@redhat.com wrote:


 On 13/08/14 09:28, Angus Lees wrote:
  I'm doing various small cleanup changes as I explore the neutron
  codebase. Some of these cleanups are to fix actual bugs discovered
  in the code.  Almost all of them are tiny and obviously correct.
 
  A recurring reviewer comment is that the change should have had an
   accompanying bug report and that they would rather that change was
  not submitted without one (or at least, they've -1'ed my change).
 
  I often didn't discover these issues by encountering an actual
  production issue so I'm unsure what to include in the bug report
  other than basically a copy of the change description.  I also
  haven't worked out the pattern yet of which changes should have a
  bug and which don't need one.
 
  There's a section describing blueprints in NeutronDevelopment but
  nothing on bugs.  It would be great if someone who understands the
  nuances here could add some words on when to file bugs: Which type
  of changes should have accompanying bug reports? What is the
  purpose of that bug, and what should it contain?
 

 It was discussed before at:
 http://lists.openstack.org/pipermail/openstack-dev/2014-May/035789.html

 /Ihar


Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Armando M.


  At the moment pylint on neutron is *very* noisy, and I've been looking
 through
  the reported issues by hand to get a feel for what's involved.  Enabling
  pylint is a separate discussion that I'd like to have - in some other
 thread.
 

 I think enabling pylint check was discussed at the start of the
 project, but for the reasons you mention, it was not considered.


Yes, noise can be a problem, but we should be able to adjust it to a level
we're comfortable with, at least for catching the dangerous violations.
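For what it's worth, pylint itself supports exactly this kind of tuning; a sketch of a pylintrc that silences everything and then opts back in to a few high-signal checks (the particular selection of checks here is just an example):

```ini
# Sketch of a low-noise pylintrc: disable every message, then
# re-enable only checks that tend to catch real bugs.
[MESSAGES CONTROL]
disable=all
enable=undefined-variable,used-before-assignment,unused-import
```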


[openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-21 Thread Armando M.
Hi folks,

According to [1], we have ways to introduce external references to commit
messages.

These are useful to mark certain patches and their relevance in the context
of documentation, upgrades, etc.

I was wondering whether it would be useful to consider adding another
tag:

GateFailureFix

The objective of this tag, mainly for consumption by the review team, would
be to make sure that some patches get more attention than others, as they
affect how quickly certain critical issues are addressed (and gate
failures affect everyone).

As for machine consumption, I know that some projects use the
'gate-failure' tag to categorize LP bugs that affect the gate. The use of a
GateFailureFix tag in the commit message could make the tagging automatic,
so that we can keep a log of all the gate failures over time.
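To make the proposal concrete, a sketch of how a commit message carrying the tag might look; the subject, description, and bug number are purely illustrative, and Closes-Bug is one of the existing external-reference tags documented in [1]:

```text
Fix race condition in DHCP agent port cache

<description of the fix and why it unblocks the gate>

GateFailureFix
Closes-Bug: #1234567
```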

Not sure if this was proposed before, and I welcome any input on the matter.

Cheers,
Armando

[1] -
https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references


Re: [openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-21 Thread Armando M.

 A concern with this approach is it's pretty arbitrary, and not always
 clear which bugs are being addressed and how severe they are.


Well, isn't establishing whether LP reports are actual bugs and assigning
their severity exactly what triaging is for?



 An idea that came up in the Infra/QA meetup was to build a custom review
 dashboard based on the bug list in elastic recheck. That would also
 encourage people to categorize this bugs through that system, and I
 think provide a virtuous circle around identifying the issues at hand.


Having an elastic-recheck entry means that the bug has already been vetted,
that a fingerprint for the bug has been filed, etc. Granted, some gate
failures may be long-lasting, but I was hoping this mechanism would target
those failures that are fixed fairly quickly.



 I think Joe Gordon had a first pass at this, but I'd be more interested
 in doing it this way because it means the patch author fixing a bug just
 needs to know they are fixing the bug. Whether or not it's currently a
 gate issue would be decided not by the commit message (static) but by
 our system that understands what are the gate issues *right now* (dynamic).


Gate failures are not exactly low-hanging fruit, so it's likely that the
author of the patch already knows that he's fixing a severe issue. The tag
would be an alert for other reviewers so that they can give the patch more
attention. As a core reviewer, I welcome any proposal that wouldn't cause a
reviewer to switch across yet another dashboard, as we already have plenty
(but maybe that's just me).

Having said that, it sounds like you guys have already thought about this,
so it makes sense to discard this idea.

Thanks,
Armando


 -Sean

 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-10 Thread Armando M.
Hi,

I devoured this thread; it was so interesting and full of
insights. It's not news that we've been pondering this in the
Neutron project for the previous and current cycle or so.

Likely, this effort is going to take more than two cycles, and would
require a very focused team of people working closely together to
address this (most likely the core team members plus a few other folks
interested).

One question I was unable to get a clear answer to was: what happens to
existing/new bug fixes and features? Would the codebase go into lockdown
mode, i.e. not accepting anything else that isn't specifically
targeting this objective? Just using NFV as an example, I can't
imagine having changes supporting NFV still being reviewed and merged
while this process takes place...it would be like shooting at a moving
target! If we did go into lockdown mode, what happens to all the
corporate-backed agendas that aim at delivering new value to
OpenStack?

Should we relax what goes into the stable branches, i.e. consider
having a "Juno on steroids" six months from now that includes some of
the features/fixes that didn't land in time before this process kicks
off?

I like the end goal of having a leaner Nova (or Neutron for that
matter), it's the transition that scares me a bit!

Armando



Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Armando M.
On 10 September 2014 22:23, Russell Bryant rbry...@redhat.com wrote:
 On 09/10/2014 10:35 PM, Armando M. wrote:
 Hi,

 I devoured this thread, so much it was interesting and full of
 insights. It's not news that we've been pondering about this in the
 Neutron project for the past and existing cycle or so.

 Likely, this effort is going to take more than two cycles, and would
 require a very focused team of people working closely together to
 address this (most likely the core team members plus a few other folks
 interested).

 One question I was unable to get a clear answer was: what happens to
 existing/new bug fixes and features? Would the codebase go in lockdown
 mode, i.e. not accepting anything else that isn't specifically
 targeting this objective? Just using NFV as an example, I can't
 imagine having changes supporting NFV still being reviewed and merged
 while this process takes place...it would be like shooting at a moving
 target! If we did go into lockdown mode, what happens to all the
 corporate-backed agendas that aim at delivering new value to
 OpenStack?

 Yes, I imagine a temporary slow-down on new feature development makes
 sense.  However, I don't think it has to be across the board.  Things
 should be considered case by case, like usual.

Aren't we trying to move away from the 'usual'? Considering things on
a case by case basis still requires review cycles, etc. Keeping the
status quo would mean prolonging the exact pain we're trying to
address.


 For example, a feature that requires invasive changes to the virt driver
 interface might have a harder time during this transition, but a more
 straight forward feature isolated to the internals of a driver might be
 fine to let through.  Like anything else, we have to weight cost/benefit.

 Should we relax what goes into the stable branches, i.e. considering
 having  a Juno on steroids six months from now that includes some of
 the features/fixes that didn't land in time before this process kicks
 off?

 No ... maybe I misunderstand the suggestion, but I definitely would not
 be in favor of a Juno branch with features that haven't landed in master.


I was thinking of the bold move of having Kilo (and beyond)
development solely focused on this transition. Until it is
complete, nothing would be merged that does not directly pertain to this
objective. At the same time, we'd still want pending features/fixes
(and possibly new features) to land somewhere stable-ish. I fear that
doing so in master, while stuff is churned up and moved out into
external repos, will make this whole task harder than it already is.

Thanks,
Armando

 --
 Russell Bryant



Re: [openstack-dev] [neutron] DVR Tunnel Design Question

2014-09-17 Thread Armando M.
VLAN is on the radar; VXLAN/GRE was done to start with.

I believe Vivek mentioned the rationale in another thread. The gist
of it is below:

In the current architecture, we use a unique DVR MAC per compute node
to forward DVR-routed traffic directly to the destination compute node.
The DVR-routed traffic from the source compute node will carry the
destination VM's underlay VLAN in the frame, but the source MAC in
that same frame will be the DVR unique MAC. So the same DVR unique MAC
is used for potentially a number of overlay-network VMs that exist
on that same source compute node.

The underlay infrastructure switches will see the same DVR unique MAC
being associated with different VLANs on incoming frames, and so this
would result in VLAN thrashing on the switches in the physical cloud
infrastructure. Since tunneling protocols carry the entire DVR-routed
inner frames as tunnel payloads, there is no thrashing effect on the
underlay switches.

There will still be a thrashing effect on endpoints on the compute nodes
themselves, when they try to learn the association between the inner
frame's source MAC and the TEP port on which the tunneled frame is
received. But we have addressed that in the L2 agent with a 'DVR learning
blocker' table, which ensures that learning is side-stepped for DVR-routed
packets alone.

As a result, VLAN was not promoted as a supported underlay for the
initial DVR architecture.
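A toy model may help illustrate the thrashing effect described above. This is purely illustrative (a real switch keys its forwarding table per VLAN and port, and the MAC address below is made up), but the flapping dynamic is the same:

```python
def learn(table, frames):
    """Update a {mac: vlan} learning table and count how many times an
    existing entry has to be rewritten (the 'thrash' events)."""
    thrash = 0
    for mac, vlan in frames:
        if mac in table and table[mac] != vlan:
            thrash += 1  # same MAC seen on a different VLAN: entry flaps
        table[mac] = vlan
    return thrash

# The same DVR unique MAC arrives tagged with the underlay VLAN of
# whichever destination VM each frame happens to be routed to:
dvr_mac = "fa:16:3f:00:00:01"
frames = [(dvr_mac, 100), (dvr_mac, 200), (dvr_mac, 100)]
print(learn({}, frames))  # -> 2: the entry flaps between VLANs 100 and 200
```

With a tunneled underlay, the switches only ever see the outer tunnel headers, so this flapping never reaches them.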

Cheers,
Armando

On 16 September 2014 20:35, 龚永生 gong...@unitedstack.com wrote:
 I think the VLAN should also be supported later.  The tunnel should not be
 the prerequisite for the DVR feature.


 -- Original --
 From:  Steve Wormleyopenst...@wormley.com;
 Date:  Wed, Sep 17, 2014 10:29 AM
 To:  openstack-devopenstack-dev@lists.openstack.org;
 Subject:  [openstack-dev] [neutron] DVR Tunnel Design Question

 In our environment using VXLAN/GRE would make it difficult to keep some of
 the features we currently offer our customers. So for a while now I've been
 looking at the DVR code, blueprints and Google drive docs and other than it
 being the way the code was written I can't find anything indicating why a
 Tunnel/Overlay network is required for DVR or what problem it was solving.

 Basically I'm just trying to see if I missed anything as I look into doing a
 VLAN/OVS implementation.

 Thanks,
 -Steve Wormley




Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Armando M.
What about:

https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
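For context, that requirements entry (the ordereddict backport) only helps if the code guards its import; a minimal sketch of the usual fallback pattern (on Python 2.7+ the first import simply succeeds, so the backport is only exercised on 2.6):

```python
# Guarded import: use the stdlib OrderedDict where it exists (2.7+),
# and fall back to the "ordereddict" backport package on Python 2.6.
try:
    from collections import OrderedDict
except ImportError:  # Python 2.6: collections has no OrderedDict
    from ordereddict import OrderedDict

d = OrderedDict([("a", 1), ("b", 2)])
print(list(d.keys()))  # -> ['a', 'b'] (insertion order preserved)
```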



On 22 September 2014 10:23, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:
 My team just ran into an issue where neutron was not passing unit tests
 when run under Python 2.6.  We tracked this down to a test support
 function using collections.OrderedDict.  This was in locally forked
 code, but when I compared it to upstream code, I found that the code in
 upstream neutron is identical…meaning that upstream neutron cannot
 possibly be passing unit tests under Python 2.6.  Yet, somehow, the
 neutron reviews I've looked at are passing the Python 2.6 gate!  Any
 ideas as to how this could be happening?

 For the record, the problem is in neutron/tests/unit/test_api_v2.py:148,
 in the function _get_collection_kwargs(), which uses
 collections.OrderedDict.  As there's no reason to use OrderedDict here
 that I can see—there's no definite order on the initialization, and all
 consumers pass it to an assert_called_once_with() method with the '**'
 operator—I have proposed a review[1] to replace it with a simple dict.

 [1] https://review.openstack.org/#/c/123189/
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace




Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Armando M.
I suspect that the very reason underlying the existence of this thread
is that some users out there are not quite ready to pull the plug on
Python 2.6.

Any decision about dropping support for Python 2.6 should not be
based solely on making developers' lives easier, but maybe I am
stating the obvious.

Thanks,
Armando

On 22 September 2014 11:39, Solly Ross sr...@redhat.com wrote:
 I'm in favor of killing Python 2.6 with fire.
 Honestly, I think it's hurting code readability and productivity --

 You have to constantly think about whether or not some feature that
 the rest of the universe is already using is supported in Python 2.6
 whenever you write code.

 As for readability, things like 'contextlib.nested' can go away if we can
 kill Python 2.6 (Python 2.7 supports nested context managers OOTB, in a much
 more readable way).

 Best Regards,
 Solly

 - Original Message -
 From: Joshua Harlow harlo...@outlook.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, September 22, 2014 2:33:16 PM
 Subject: Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't 
 possibly be passing in neutron

 Just as an update to what exactly is RHEL python 2.6...

 This is the expanded source rpm:

 http://paste.ubuntu.com/8405074/

 The main one here appears to be:

 - python-2.6.6-ordereddict-backport.patch

 Full changelog @ http://paste.ubuntu.com/8405082/

 Overall I'd personally like to get rid of python 2.6, and move on, but then
 I'd also like to get rid of 2.7 and move on also ;)

 - Josh

 On Sep 22, 2014, at 11:17 AM, Monty Taylor mord...@inaugust.com wrote:

  On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
  On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
  What about:
 
  https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
 
  Pulling in ordereddict doesn't do anything if your code doesn't use it
  when OrderedDict isn't in collections, which is the case here.  Further,
  there's no reason that _get_collection_kwargs() needs to use an
  OrderedDict: it's initialized in an arbitrary order (generator
  comprehension over a set), then later passed to functions with **, which
  converts it to a plain old dict.
 
 
  So - as an update to this, this is due to RedHat once again choosing to
  backport features from 2.7 into a thing they have labeled 2.6.
 
  We test 2.6 on Centos6 - which means we get RedHat's patched version of
  Python2.6 - which, it turns out, isn't really 2.6 - so while you might
  want to assume that we're testing 2.6 - we're not - we're testing
  2.6-as-it-appears-in-RHEL.
 
  This brings up a question - in what direction do we care/what's the
  point in the first place?
 
  Some points to ponder:
 
  - 2.6 is end of life - so the fact that this is coming up is silly, we
  should have stopped caring about it in OpenStack 2 years ago at least
  - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
  point of supporting it at all
  - Maybe we ACTUALLY care about 2.6 support across the board, in which
  case we should STOP testing using Centos6 which is not actually 2.6
 
  I vote for just amending our policy right now and killing 2.6 with
  prejudice.
 
  (also, I have heard a rumor that there are people running in to problems
  due to the fact that they are deploying onto a two-release-old version
  of Debian. No offense - but there is no way we're supporting that)
 


Re: [openstack-dev] [Neutron][Testr] Brand new checkout of Neutron... getting insane unit test run results

2014-01-02 Thread Armando M.
To be fair, Neutron cores turned down reviews [1][2][3] for fear that
the patch would break Hyper-V support for Neutron.

Whether it's been hinted (erroneously) that this was a packaging issue
is irrelevant for the sake of this discussion; when I turned down
review [3], I suggested that we make the requirement dependent on the
distro, so that the problem could be solved once and for all (and
without causing any side effects).

Just adding the pyudev dependency to requirements.txt is not
acceptable for the above-mentioned reason. I am sorry that people keep
abandoning the issue without taking the bull by the horns.

[1] https://review.openstack.org/#/c/64333/
[2] https://review.openstack.org/#/c/55966/
[3] https://review.openstack.org/#/c/58884/

Cheers,
Armando

On 2 January 2014 18:03, Jay Pipes jaypi...@gmail.com wrote:
 On 01/01/2014 10:56 PM, Clark Boylan wrote:

 On Wed, Jan 1, 2014 at 7:33 PM, 黎林果 lilinguo8...@gmail.com wrote:

 I have met this problem too. The unit tests can't be run.
 The output ends with:

 Ran 0 tests in 0.673s

 OK
 cp: cannot stat `.testrepository/-1': No such file or directory

 2013/12/28 Jay Pipes jaypi...@gmail.com:

 On 12/27/2013 11:11 PM, Robert Collins wrote:


 I'm really sorry about the horrid UI - we're in the middle of fixing
 the plumbing to report this and support things like tempest better -
 from the bottom up. The subunit listing - testr reporting of listing
 errors is fixed on the subunit side, but not on the testr side
 yet.

 If you look at the end of the error:

 \rimport

 errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent\x85\xc5\x1a\\',
 stderr=None
 error: testr failed (3)

 You can see this^

 which translates as
 import errors
 neutron.tests.unit.linuxbridge.test_lb_neutron_agent

 so

 neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py

 is failing to import.



 Phew, thanks Rob! I was a bit stumped there :) I have identified the
 import
 issue (this is on a fresh checkout of Neutron, BTW, so I'm a little
 confused
 how this made it through the gate...)

 (.venv)jpipes@uberbox:~/repos/openstack/neutron$ python
 Python 2.7.4 (default, Sep 26 2013, 03:20:26)
 [GCC 4.7.3] on linux2
 Type help, copyright, credits or license for more information.

 import neutron.tests.unit.linuxbridge.test_lb_neutron_agent

 Traceback (most recent call last):
File stdin, line 1, in module
File neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py, line
 29,
 in module
  from neutron.plugins.linuxbridge.agent import
 linuxbridge_neutron_agent
File
 neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py,
 line 33, in module
  import pyudev
 ImportError: No module named pyudev

 Looks like pyudev needs to be added to requirements.txt... I've filed a
 bug:

 https://bugs.launchpad.net/neutron/+bug/1264687

 with a patch here:

 https://review.openstack.org/#/c/64333/

 Thanks again, much appreciated!
 -jay


 On 28 December 2013 13:41, Jay Pipes jaypi...@gmail.com wrote:


 Please see:

 http://paste.openstack.org/show/57627/

 This is on a brand new git clone of neutron and then running
 ./run_tests.sh
 -V (FWIW, the same behavior occurs when running with tox -epy27 as
 well...)

 I have zero idea what to do...any help would be appreciated!

 It's almost like the subunit stream is being dumped as-is into the
 console.

 Best!
 -jay









 It looks like the problem is that there is a dependency on pyudev
 which only works properly on Linux. The neutron setup_hook does
 properly install pyudev on Linux (explains why the tests run in the
 gate), but would not work properly on Windows or OS X. I assume folks
 are trying to run the tests on something other than Linux?


 Nope, the problem occurs on Linux. I was using Ubuntu 13.04.

 I abandoned my patch after some neutron-cores said it wasn't correct to put
 Linux-only dependencies in requirements.txt and said it was a packaging
 issue.

 The problem is that requirements.txt is *all about packaging issues*. Until
 we have some way of indicating this dependency is only for
 Linux/Windows/whatever in our requirements.txt files, this is going to be a
 pain in the butt.
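For what it's worth, the missing "some way of indicating this dependency is only for Linux" later arrived in packaging tooling as PEP 508 environment markers, which pip evaluates per platform. A hedged sketch of what such a requirements.txt entry looks like (note that Python 2 reports `sys_platform` as `linux2`):

```
# requirements.txt: install pyudev only on Linux
pyudev; sys_platform == 'linux' or sys_platform == 'linux2'
```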


 Neutron may want to do
 something similar to what Nova does when libvirt is not importable,

 https://git.openstack.org/cgit/openstack/nova/tree/nova/tests/virt/libvirt/test_libvirt.py#n77
 and use a fake in order to get the tests to run properly anyways.
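The Nova trick Clark references, guarding the import and substituting a fake, might look roughly like this for pyudev. This is a sketch only; `FakeMonitor` and `get_monitor` are made-up names, not actual Neutron or Nova code:

```python
class FakeMonitor(object):
    """Stand-in for pyudev's monitor: records filter calls instead of
    talking to udev, so unit tests can run on any platform."""

    def __init__(self):
        self.filters = []

    def filter_by(self, subsystem=None):
        self.filters.append(subsystem)


try:
    import pyudev  # Linux-only; unavailable on OS X/Windows
except ImportError:
    pyudev = None


def get_monitor():
    # Return a real udev monitor when pyudev imported cleanly,
    # otherwise fall back to the fake.
    if pyudev is not None:
        context = pyudev.Context()
        return pyudev.Monitor.from_netlink(context)
    return FakeMonitor()
```

Code under test then calls `get_monitor()` unconditionally and never needs to know which implementation it got.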


 Possible, but that's just a hack at its core. Fakes should be used to speed
 up unit tests where all you're testing is the interface between the
 faked-out object and the calling object, not whether or not the real object
 

Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-16 Thread Armando M.
+1
On Feb 13, 2014 5:52 PM, Nachi Ueno na...@ntti3.com wrote:

 +1

 On Wednesday, 12 February 2014, Mayur Patil ram.nath241...@gmail.com wrote:

 +1

 *--*
 *Cheers,*
 *Mayur*




Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Armando M.
Thomas,

I feel your frustration, however before complaining please do follow
the actual chain of events.

Patch [1]: I asked a question which I never received an answer to.
Patch [2]: I did put a -1, but I have nothing against this patch per
se. It was only recently abandoned, and my -1 was primarily meant to
give patch [1] the opportunity to be resumed.

No action on a negative review means automatic expiration; if you lose
interest in something you care about, whose fault is that?

A.

[1] = https://review.openstack.org/#/c/52757
[2] = https://review.openstack.org/#/c/68611

On 19 February 2014 06:28, Thomas Goirand z...@debian.org wrote:
 Hi,

 I've seen this one:
 https://review.openstack.org/#/c/68611/

 which is supposed to fix something for PostgreSQL. This is funny, because
 I was writing the exact same patch to fix it for SQLite, though that
 was before the last summit in HK.

 Since then, I just gave up on having my Debian-specific patch [1]
 upstreamed. No review, despite my insistence. Mark, at the HK summit,
 told me that it was pending discussion about what would be the policy
 for SQLite.

 Guys, this is disappointing. That's the 2nd time the same patch is being
 blocked, with no explanations.

 Could 2 core reviewers have a *serious* look at this patch, and explain
 why it's not ok for it to be approved? If nobody says why, then could
 this be approved, so we can move on?

 Cheers,

 Thomas Goirand (zigo)

 [1]
 http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8



Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Armando M.
On 20 February 2014 14:13, Vincent Untz vu...@suse.com wrote:
 Le jeudi 20 février 2014, à 12:02 -0800, Armando M. a écrit :
 Thomas,

 I feel your frustration, however before complaining please do follow
 the actual chain of events.

 Patch [1]: I asked a question which I never received an answer to.
 Patch [2]: I did put a -1, but I have nothing against this patch per
 se. It was only recently abandoned, and my -1 was primarily meant to
 give patch [1] the opportunity to be resumed.

 Well, I did reply to your comment on the same day, so I'm not sure what
 else I, as submitter, could have done more to address your comment and
 convince you to change the -1 to +1.

 No action on a negative review means automatic expiration; if you lose
 interest in something you care about, whose fault is that?

 I beg to disagree. If we let patches go to automatic expiration, then we
 as a project will just lose contributors. I don't think we should accept
 that as a fatality.

The power to restore a change is in the hands of the contributor, not the reviewer.

Issues have different priorities and people shouldn't feel singled out
if their changes lose steam. The best course of action is to keep
sticking by them until the light at the end of the tunnel is in sight
:)

That said, I think one of the issues that delays the approval of
patches dealing with DB migrations (which apply across multiple Neutron
releases) is the lack of a stable CI job (like Grenade) that validates
them and relieves the core reviewer of some of the burden of going
through the patch, the testbed, etc.

This is coming though, we just need to be more patient, venting
frustration doesn't fix code!

A.


 I just restored the patch, btw :-)

 Vincent

 A.

 [1] = https://review.openstack.org/#/c/52757
 [2] = https://review.openstack.org/#/c/68611

 On 19 February 2014 06:28, Thomas Goirand z...@debian.org wrote:
  Hi,
 
  I've seen this one:
  https://review.openstack.org/#/c/68611/
 
  which is supposed to fix something for PostgreSQL. This is funny, because
  I was writing the exact same patch to fix it for SQLite, though that
  was before the last summit in HK.
 
  Since then, I just gave up on having my Debian specific patch [1] being
  upstreamed. No review, despite my insistence. Mark, on the HK summit,
  told me that it was pending discussion about what would be the policy
  for SQLite.
 
  Guys, this is disappointing. That's the 2nd time the same patch is being
  blocked, with no explanations.
 
  Could 2 core reviewers have a *serious* look at this patch, and explain
  why it's not ok for it to be approved? If nobody says why, then could
  this be approved, so we can move on?
 
  Cheers,
 
  Thomas Goirand (zigo)
 
  [1]
  http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8
 


 --
 Happy people are not in a hurry.



Re: [openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Armando M.
Nice one!

On 21 February 2014 11:22, Aaron Rosen aaronoro...@gmail.com wrote:
 This should fix the false positive for brocade:
 https://review.openstack.org/#/c/75486/

 Aaron


 On Fri, Feb 21, 2014 at 10:34 AM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 Yesterday, I pushed a patch to review and was surprised that several of
 the third party CI systems reported back that the patch-set worked where it
 definitely shouldn't have. Anyways, I tested out my theory a little more and
 it turns out a few of the 3rd party CI systems for neutron are just
 returning  SUCCESS even if the patch set didn't run successfully
 (https://review.openstack.org/#/c/75304/).

 Here's a short summary of what I found.

 Hyper-V CI -- This seems like an easy fix, as it posts 'build succeeded'
 but also, off to the side, 'test run failed'. It would probably be a
 good idea to remove the 'build succeeded' message to avoid any confusion.


 Brocade CI - The log files it posts show that it tries to apply
 my patch but fails:

 2014-02-20 20:23:48 + cd /opt/stack/neutron
 2014-02-20 20:23:48 + git fetch
 https://review.openstack.org/openstack/neutron.git refs/changes/04/75304/1
 2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
 2014-02-20 20:24:00  * branchrefs/changes/04/75304/1 -
 FETCH_HEAD
 2014-02-20 20:24:00 + git checkout FETCH_HEAD
 2014-02-20 20:24:00 error: Your local changes to the following files would
 be overwritten by checkout:
 2014-02-20 20:24:00  etc/neutron/plugins/ml2/ml2_conf_brocade.ini
 2014-02-20 20:24:00
  neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
 2014-02-20 20:24:00 Please, commit your changes or stash them before you
 can switch branches.
 2014-02-20 20:24:00 Aborting
 2014-02-20 20:24:00 + cd /opt/stack/neutron

 but it still continues running (without my patchset) and reports success. --
 This actually looks like a devstack bug (I'll check it out).

 PLUMgrid CI - Seems to always vote +1 without a failure
 (https://review.openstack.org/#/dashboard/10117) though the logs are private
 so we can't really tell what's going on.

 I was thinking it might be worthwhile to have a job that tests
 that CI actually fails when we expect it to.

 Best,

 Aaron
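A cheap guard along the lines Aaron suggests: before voting, scan the console log for evidence that the change under review was actually applied, and refuse to report SUCCESS otherwise. A minimal sketch, with failure fingerprints borrowed from the Brocade log excerpt above (illustrative only, not any real CI's code):

```python
def patch_was_applied(console_log):
    """Return False when the log shows the git checkout of the change
    under review aborted, so a later SUCCESS vote would be bogus."""
    fatal_markers = (
        "error: Your local changes to the following files would",
        "Aborting",
    )
    fetched = "git checkout FETCH_HEAD" in console_log
    aborted = any(marker in console_log for marker in fatal_markers)
    return fetched and not aborted


bad_log = """\
+ git checkout FETCH_HEAD
error: Your local changes to the following files would be overwritten by checkout:
Aborting
"""
print(patch_was_applied(bad_log))  # -> False: do not vote SUCCESS
```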





Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-04 Thread Armando M.
Hi Simon,

You are absolutely right in your train of thought: unless the
third-party CI monitors and vets all the potential changes it cares
about, there's always a chance something might break. This is why I
think it's important that each Neutron third-party CI test not only
Neutron changes, but also Nova's, DevStack's and Tempest's. Filters
may be added to test only the relevant subtrees.

For instance, the VMware CI runs the full suite of tempest smoke
tests, as they come from upstream and it vets all the changes that go
in Tempest made to API and scenario tests as well as configuration
changes. As for Nova, we test changes to the vif parts, and for
DevStack, we validate changes made to lib/neutron*.

Vetting all the changes coming in VS only the ones that can
potentially break third-party support is a balancing act when you
don't have infinite resources at your disposal, or you're just ramping
up the CI infrastructure.

Cheers,
Armando

On 4 April 2014 02:00, Simon Pasquier simon.pasqu...@bull.net wrote:
 Hi Salvatore,

 On 03/04/2014 14:56, Salvatore Orlando wrote:
 Hi Simon,

 snip

 I hope stricter criteria will be enforced for Juno; I personally think
 every CI should run at least the smoketest suite for L2/L3 services (eg:
 load balancer scenario will stay optional).

 I thought about this a little, and I feel like it might not have
 _immediately_ caught the issue Kyle talked about [1].

 Let's rewind the time line:
 1/ Change to *Nova* adding external events API is merged
 https://review.openstack.org/#/c/76388/
 2/ Change to *Neutron* notifying Nova when ports are ready is merged
 https://review.openstack.org/#/c/75253/
 3/ Change to *Nova* making libvirt wait for Neutron notifications is merged
 https://review.openstack.org/#/c/74832/

 At this point, and assuming the external ODL CI system had been running
 the L2/L3 smoke tests, change #3 could still have passed, since external
 Neutron CIs don't vote on Nova. Instead, the CI would have voted against
 any subsequent change to Neutron.

 Simon

 [1] https://bugs.launchpad.net/neutron/+bug/1301449


 Salvatore

 [1] https://review.openstack.org/#/c/75304/



 On 3 April 2014 12:28, Simon Pasquier simon.pasqu...@bull.net
 mailto:simon.pasqu...@bull.net wrote:

 Hi,

 I'm looking at [1] but I see no requirement about which Tempest tests
 should be executed.

 In particular, I'm a bit puzzled that it is not mandatory to boot an
 instance and check that it gets connected to the network. To me, this is
 the very minimum for asserting that your plugin or driver is working
 with Neutron *and* Nova (I'm not even talking about security groups). I
 had a quick look at the existing 3rd party CI systems and I found none
 running this kind of check (correct me if I'm wrong).

 Thoughts?

 [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
 --
 Simon Pasquier
 Software Engineer (OpenStack Expertise Center)
 Bull, Architect of an Open World
 Phone: + 33 4 76 29 71 49 tel:%2B%2033%204%2076%2029%2071%2049
 http://www.bull.com



Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-11-28 Thread Armando M.
I have been doing so in a number of patches I pushed to reduce error
traces due to the communication between the server and the dhcp agent.

I wanted to take care of the l3 agent too, but one thing I noticed is
that I couldn't find a log for it (I mean among the artifacts that are
published at the job's completion). Actually, I couldn't find an l3 agent
started by devstack either.

Am I missing something?

On 27 November 2013 09:08, Salvatore Orlando sorla...@nicira.com wrote:
 Thanks Maru,

 This is something my team had on the backlog for a while.
 I will push some patches to contribute towards this effort in the next few
 days.

 Let me know if you're already thinking of targeting the completion of this
 job for a specific deadline.

 Salvatore


 On 27 November 2013 17:50, Maru Newby ma...@redhat.com wrote:

 Just a heads up, the console output for neutron gate jobs is about to get
 a lot noisier.  Any log output that contains 'ERROR' is going to be dumped
 into the console output so that we can identify and eliminate unnecessary
 error logging.  Once we've cleaned things up, the presence of unexpected
 (non-whitelisted) error output can be used to fail jobs, as per the
 following Tempest blueprint:

 https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors

 I've filed a related Neutron blueprint for eliminating the unnecessary
 error logging:


 https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error

 I'm looking for volunteers to help with this effort, please reply in this
 thread if you're willing to assist.

 Thanks,


 Maru
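The dump-and-whitelist approach Maru describes might be sketched as below; the whitelist entry is a made-up example, not the real Tempest list:

```python
import re

# Known-benign ERROR lines (hypothetical example entry).
WHITELIST = [
    re.compile(r"ERROR .* Connection to dhcp agent lost, retrying"),
]


def unexpected_errors(log_lines):
    """Return the ERROR lines that are not whitelisted; under the
    proposed blueprint, a non-empty result would fail the job."""
    unexpected = []
    for line in log_lines:
        if "ERROR" not in line:
            continue
        if any(pattern.search(line) for pattern in WHITELIST):
            continue
        unexpected.append(line)
    return unexpected


sample = [
    "INFO neutron.agent.dhcp ...",
    "ERROR neutron.agent Connection to dhcp agent lost, retrying",
    "ERROR neutron.db Traceback (most recent call last):",
]
print(unexpected_errors(sample))  # only the traceback line survives
```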


Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-06-16 Thread Armando M.
I believe the Brocade's mech driver might have the same problem.

That said, if the content of the rpm that installs the BigSwitch plugin is
just the sub-tree for bigswitch (plus the config files, perhaps), you might
get around this issue by just installing the bigswitch-plugin package. I
assume you tried that and it didn't work?

I was unable to find the rpm specs for CentOS to confirm.

A.


On 17 June 2014 00:02, Kevin Benton blak...@gmail.com wrote:

 Hello,

 In the Big Switch ML2 driver, we rely on quite a bit of code from the Big
 Switch plugin. This works fine for distributions that include the entire
 neutron code base. However, some break apart the neutron code base into
 separate packages. For example, in CentOS I can't use the Big Switch ML2
 driver with just ML2 installed because the Big Switch plugin directory is
 gone.

 Is there somewhere where we can put common third party code that will be
 safe from removal during packaging?


 Thanks
 --
 Kevin Benton



Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-06-17 Thread Armando M.
I don't think that a common area, as proposed, is a silver bullet for
solving packaging issues such as this one. Knowing that the right source
tree bits are dropped onto the file system is not enough to guarantee that
the end-to-end solution will work on a specific distro. Other issues may
arise after configuration and execution.

IMO, this is a bug in the package spec, and should be taken care of
during packaging implementation, testing and validation.

That said, I think the right approach is to provide a 'python-neutron'
package that installs the entire source tree; the specific plugin package
can then take care of the specifics, like config files.

Armando


On 17 June 2014 06:43, Shiv Haris sha...@brocade.com wrote:

 Right Armando.

 Brocade’s mech driver problem is due to NETCONF templates - would also
 prefer to see a common area for such templates – not just common code.

 Sort of like:

 common/brocade/templates
 common/bigswitch/*

 -Shiv
 From: Armando M. arma...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] - Location for common third-party
 libs?

 I believe the Brocade's mech driver might have the same problem.

 That said, if the content of the rpm that installs the BigSwitch plugin is
 just the sub-tree for bigswitch (plus the config files, perhaps), you might
 get around this issue by just installing the bigswitch-plugin package. I
 assume you tried that and it didn't work?

 I was unable to find the rpm specs for CentOS to confirm.

 A.


 On 17 June 2014 00:02, Kevin Benton blak...@gmail.com wrote:

 Hello,

 In the Big Switch ML2 driver, we rely on quite a bit of code from the Big
 Switch plugin. This works fine for distributions that include the entire
 neutron code base. However, some break apart the neutron code base into
 separate packages. For example, in CentOS I can't use the Big Switch ML2
 driver with just ML2 installed because the Big Switch plugin directory is
 gone.

 Is there somewhere where we can put common third party code that will be
 safe from removal during packaging?


 Thanks
 --
 Kevin Benton



Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Armando M.
I wonder what the turnaround time of trivial patches actually is; I bet
it's very, very small, and, as Daniel said, the human burden is rather
minimal (I would be more concerned about slowing them down in the gate,
but I digress).

I think that introducing a two-tier level of patch approval can only
mitigate the problem; I wonder whether we'd need to go a lot further and
instead figure out a way to borrow concepts from queueing theory so that
they can be applied in the context of Gerrit. For instance, Little's law [1]
says:

The long-term average number of customers (in this context *reviews*) in a
stable system L is equal to the long-term average effective arrival rate,
λ, multiplied by the average time a customer spends in the system, W; or
expressed algebraically: L = λW.

L can be used to determine the number of core reviewers that a project
will need at any given time in order to meet a certain arrival rate and
average time spent in the queue. If the number of core reviewers is a lot
less than L, then the core team is understaffed and will need to grow.

If we figured out how to model and measure Gerrit as a queuing system, then
we could improve its performance a lot more effectively; for instance, the
idea of privileging trivial patches over longer ones has roots in a popular
scheduling policy [2] for M/G/1 queues [3], but that does not really help
the aging of 'longer service time' patches and has no built-in preemption
mechanism to avoid starvation.
Just a crazy opinion...
Armando

[1] - http://en.wikipedia.org/wiki/Little's_law
[2] - http://en.wikipedia.org/wiki/Shortest_job_first
[3] - http://en.wikipedia.org/wiki/M/G/1_queue
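To make the arithmetic concrete, here is Little's law applied to review staffing with entirely made-up numbers (40 incoming reviews a day, 5 days average time in review, 8 reviews a day per core; none of these figures come from real Gerrit data):

```python
def average_backlog(arrival_rate, avg_time_in_system):
    """Little's law, L = lambda * W: the long-term average number of
    reviews in flight."""
    return arrival_rate * avg_time_in_system


def cores_needed(arrival_rate, reviews_per_core_per_day):
    # In a stable queue, service capacity must at least match the
    # arrival rate, or W (and hence L) grows without bound.
    return arrival_rate / reviews_per_core_per_day


print(average_backlog(40.0, 5.0))   # 200 reviews in flight on average
print(cores_needed(40.0, 8.0))      # 5 cores just to keep pace
```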


On 17 June 2014 14:12, Matthew Booth mbo...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 17/06/14 12:36, Sean Dague wrote:
  On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
  On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
  We all know that review can be a bottleneck for Nova
  patches. Not only that, but a patch lingering in review, no
  matter how trivial, will eventually accrue rebases which sap
  gate resources, developer time, and will to live.
 
  It occurs to me that there are a significant class of patches
  which simply don't require the attention of a core reviewer.
  Some examples:
 
  * Indentation cleanup/comment fixes * Simple code motion * File
  permission changes * Trivial fixes which are obviously correct
 
  The advantage of a core reviewer is that they have experience
  of the whole code base, and have proven their ability to make
  and judge core changes. However, some fixes don't require this
  level of attention, as they are self-contained and obvious to
  any reasonable programmer.
 
  Without knowing anything of the architecture of gerrit, I
  propose something along the lines of a '+1 (trivial)' review
  flag. If a review gained some small number of these, I suggest
  2 would be reasonable, it would be equivalent to a +2 from a
  core reviewer. The ability to set this flag would be a
  privilege. However, the bar to gaining this privilege would be
  low, and preferably automatically set, e.g. 5 accepted patches.
  It would be removed for abuse.
 
  Is this practical? Would it help?
 
  You are right that some types of fix are so straightforward that
  most reasonable programmers can validate them. At the same time
  though, this means that they also don't really consume
  significant review time from core reviewers.  So having
  non-cores' approve trivial fixes wouldn't really reduce the
  burden on core devs.
 
  The main positive impact would probably be a faster turn around
  time on getting the patches approved because it is easy for the
  trivial fixes to drown in the noise.
 
  IME any non-trivial change to gerrit is just not going to happen
  in any reasonably useful timeframe though. Perhaps an
  alternative strategy would be to focus on identifying which the
  trivial fixes are. If there was an good way to get a list of all
  pending trivial fixes, then it would make it straightforward for
  cores to jump in and approve those simple patches as a priority,
  to avoid them languishing too long.
 
  It would be nice if gerrit had simple keyword tagging so any
  reviewer can tag an existing commit as trivial, but that
  doesn't seem to exist as a concept yet.
 
  So, as an alternative, perhaps submit trivial stuff using a
  well-known topic, e.g.
 
  # git review --topic trivial
 
  Then you can just query all changes in that topic to find easy
  stuff to approve.
 
  It could go in the commit message:
 
  TrivialFix
 
  Then could be queried with -
  https://review.openstack.org/#/q/message:TrivialFix,n,z
 
  If a reviewer felt it wasn't a trivial fix, they could just edit
  the commit message inline to drop it out.

 +1. If possible I'd update the query to filter out anything with a -1.

 Where do we document these things? I'd be happy to propose a docs update.

 Matt
 - --
 Matthew Booth
 Red 

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Armando M.
Just a provocative thought: if we used the ovsdb connection instead, do we
really need an L2 agent :P?


On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

 Another area of improvement for the agent would be to move away from
 executing CLIs for port commands and instead use OVSDB. Terry Wilson
 and I talked about this, and re-writing ovs_lib to use an OVSDB
 connection instead of the CLI methods would be a huge improvement
 here. I'm not sure if Terry was going to move forward with this, but
 I'd be in favor of this for Juno if he or someone else wants to move
 in this direction.

 Thanks,
 Kyle

 On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  We've started doing this in a slightly more reasonable way for icehouse.
  What we've done is:
  - remove unnecessary notification from the server
  - process all port-related events, either trigger via RPC or via monitor
 in
  one place
 
  Obviously there is always a lot of room for improvement, and I agree
  something along the lines of what Zang suggests would be more
 maintainable
  and ensure faster event processing as well as making it easier to have
 some
  form of reliability on event processing.
 
  I was considering doing something for the ovs-agent again in Juno, but
  since we're moving towards a unified agent, I think any new big ticket
  should address this effort.
 
  Salvatore
 
 
  On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  Awesome! Currently we are suffering from lots of bugs in the ovs-agent,
  and we also intend to rebuild a more stable, flexible agent.
 
  Drawing on the experience of the ovs-agent bugs, I think concurrency
  is also a very important problem: the agent gets lots of events from
  different greenlets (the rpc, the ovs monitor, or the main loop).
  I'd suggest serializing all events into a queue, then processing them
  in a dedicated thread. The thread checks the events one by one, in
  order, resolves what has changed, then applies the corresponding
  changes. If any error occurs in the thread, discard the event being
  processed and issue a fresh-start event, which resets everything and
  then applies the correct settings.
 
  The threading model is so important, and may prevent tons of bugs in
  future development, that we should describe it clearly in the
  architecture.
 
 
  On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
  wrote:
   Following the discussions in the ML2 subgroup weekly meetings, I have
   added
   more information on the etherpad [1] describing the proposed
   architecture
   for modular L2 agents. I have also posted some code fragments at [2]
   sketching the implementation of the proposed architecture. Please
 have a
   look when you get a chance and let us know if you have any comments.
  
   [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
   [2] https://review.openstack.org/#/c/99187/
  
  
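Zang's suggested threading model (serialize every event into one queue, consume it in a single dedicated thread, and fall back to a full resync on error) might be sketched like this; the class and event names are hypothetical, and modern Python is assumed (on Python 2 the module is `Queue`):

```python
import queue
import threading


class EventProcessor(object):
    """All agent events (rpc, ovs monitor, main loop) funnel through
    one queue and a single worker thread, so handlers never race."""

    def __init__(self, handler, resync):
        self._events = queue.Queue()
        self._handler = handler
        self._resync = resync
        self._worker = threading.Thread(target=self._run)
        self._worker.daemon = True

    def start(self):
        self._worker.start()

    def stop(self):
        self._events.put(None)  # shutdown sentinel
        self._worker.join()

    def submit(self, event):
        self._events.put(event)

    def _run(self):
        while True:
            event = self._events.get()
            if event is None:
                return
            try:
                self._handler(event)
            except Exception:
                # Discard the failing event; reset everything and
                # reapply the correct settings.
                self._resync()


handled, resyncs = [], []
processor = EventProcessor(
    handler=lambda e: handled.append(e) if e != "bad" else 1 // 0,
    resync=lambda: resyncs.append(True))
processor.start()
for event in ("port-added", "bad", "port-deleted"):
    processor.submit(event)
processor.stop()
print(handled, len(resyncs))  # ['port-added', 'port-deleted'] 1
```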


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Armando M.
Mine wasn't really a serious suggestion, Neutron's controlling logic is
already bloated as it is, and my personal opinion would be in favor of a
leaner Neutron Server rather than a more complex one; adding more
controller-like logic to it certainly goes against that direction :)

Having said that and as Vivek pointed out, using ovsdb gives us finer
control and ability to react more effectively, however, with the current
server-agent RPC framework there's no way of leveraging that... so in the
grand scheme of things I'd rather see it prioritized lower rather than
higher, to give precedence to rearchitecting the framework first.

Armando


On 17 June 2014 19:25, Narasimhan, Vivekanandan 
vivekanandan.narasim...@hp.com wrote:



 Managing the ports and plumbing logic is today driven by the L2 Agent, with
 little assistance

 from the controller.



 If we plan to move that functionality to the controller, the controller
 has to be more

 heavyweight (both hardware and software), since it has to do the job of
 the L2 Agent for all

 the compute servers in the cloud. We would need to re-verify all scale
 numbers for the controller

 when PoC'ing such a change.



 That said, replacing CLI with direct OVSDB calls in the L2 Agent is
 certainly a good direction.



 Today, the OVS Agent invokes flow calls through OVS-Lib but has no way to
 follow up

 on the success or failure of such invocations. Nor is there any guarantee
 that all such

 flow invocations will be executed by the third process spawned by OVS-Lib
 to run the CLI.



 When we transition to OVSDB calls, which are more programmatic in nature,
 we can

 enhance the Flow API (OVS-Lib) to provide more fine-grained errors/return
 codes (or content),

 and the ovs-agent (and even other components) can act on such return state
 more

 intelligently/appropriately.
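One way such fine-grained return state could look. This is purely illustrative: `FlowResult`, `add_flow`'s signature, and the error codes are invented for this sketch and are not part of the real ovs_lib API:

```python
from dataclasses import dataclass

@dataclass
class FlowResult:
    """Structured outcome of a flow operation, instead of fire-and-forget."""
    ok: bool
    code: str = "OK"     # e.g. "OK", "BRIDGE_MISSING", "SYNTAX" (hypothetical)
    detail: str = ""

def add_flow(bridge, flow, known_bridges):
    """Hypothetical fine-grained variant of a flow call: validates the
    request and returns a result the agent can act on (retry, resync...)."""
    if bridge not in known_bridges:
        return FlowResult(False, "BRIDGE_MISSING", bridge)
    if "actions" not in flow:
        return FlowResult(False, "SYNTAX", "flow has no actions")
    # ... perform the actual OVSDB/OpenFlow operation here ...
    return FlowResult(True)
```

With a return type like this, the agent could distinguish a transient failure (retry the event) from a structural one (trigger a full resync), rather than silently losing the flow.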



 --

 Thanks,



 Vivek





 *From:* Armando M. [mailto:arma...@gmail.com]
 *Sent:* Tuesday, June 17, 2014 10:26 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
 architecture



 just a provocative thought: If we used the ovsdb connection instead, do we
 really need an L2 agent :P?



 On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

 Another area of improvement for the agent would be to move away from
 executing CLIs for port commands and instead use OVSDB. Terry Wilson
 and I talked about this, and re-writing ovs_lib to use an OVSDB
 connection instead of the CLI methods would be a huge improvement
 here. I'm not sure if Terry was going to move forward with this, but
 I'd be in favor of this for Juno if he or someone else wants to move
 in this direction.

 Thanks,
 Kyle
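To illustrate the direction Kyle describes, a persistent OVSDB connection lets the agent batch several operations into one atomic transaction with per-operation results, instead of spawning one `ovs-vsctl` process per call. The interface below is a hypothetical sketch, not the actual ovs_lib or OVSDB IDL API:

```python
class OvsdbTransaction:
    """Hypothetical sketch: queue port operations and apply them in a
    single round trip over a persistent OVSDB connection."""

    def __init__(self, connection):
        self._conn = connection
        self._ops = []

    def add_port(self, bridge, port):
        self._ops.append(("add_port", bridge, port))
        return self  # chainable

    def del_port(self, bridge, port):
        self._ops.append(("del_port", bridge, port))
        return self

    def commit(self):
        # One round trip applies all queued ops atomically and returns
        # per-operation results -- something the CLI path cannot offer.
        return self._conn.transact(self._ops)

class FakeConnection:
    """Stand-in for a real OVSDB connection, for demonstration only."""
    def transact(self, ops):
        return [{"op": op[0], "ok": True} for op in ops]
```

Usage would look like `OvsdbTransaction(conn).add_port("br-int", "tap0").commit()`, with the caller inspecting each result instead of hoping the CLI succeeded.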


 On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  We've started doing this in a slightly more reasonable way for icehouse.
  What we've done is:
  - remove unnecessary notification from the server
   - process all port-related events, either triggered via RPC or via the
   monitor, in one place
 
  Obviously there is always a lot of room for improvement, and I agree
  something along the lines of what Zang suggests would be more
 maintainable
  and ensure faster event processing as well as making it easier to have
 some
  form of reliability on event processing.
 
  I was considering doing something for the ovs-agent again in Juno, but
  since we're moving towards a unified agent, I think any new big-ticket
  item should address this effort.
 
  Salvatore
 
 
  On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  Awesome! Currently we are suffering lots of bugs in ovs-agent, also
  intent to rebuild a more stable flexible agent.
 
  Drawing on our experience with ovs-agent bugs, I think concurrency
  is also a very important problem: the agent receives many events
  from different greenlets (the RPC layer, the OVS monitor, the main loop).
  I'd suggest serializing all events into a queue and processing them in
  a dedicated thread. The thread checks the events one by one, in order,
  resolves what has changed, and applies the corresponding changes. If
  any error occurs in the thread, it discards the event currently being
  processed and performs a fresh-start event, which resets everything
  and then applies the correct settings.
 
  The threading model is so important, and may prevent tons of bugs in
  future development, that we should describe it clearly in the
  architecture
 
 
  On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
  wrote:
   Following the discussions in the ML2 subgroup weekly meetings, I have
   added
   more information on the etherpad [1] describing the proposed
   architecture
   for modular L2 agents. I have also posted some code fragments at [2]
   sketching the implementation of the proposed architecture. Please
 have a
   look when you get a chance and let us know if you have any comments.
  
   [1] https://etherpad.openstack.org/p/modular-l2-agent

Re: [openstack-dev] [Neutron] VMware networking

2014-06-30 Thread Armando M.
Hi Gary,

Thanks for sending this out, comments inline.

On 29 June 2014 00:15, Gary Kotton gkot...@vmware.com wrote:

  Hi,
  At the moment there are a number of different BPs that are proposed to
 enable different VMware network management solutions. The following specs
 are in review:

1. VMware NSX-vSphere plugin: https://review.openstack.org/102720
2. Neutron mechanism driver for VMWare vCenter DVS network creation:
https://review.openstack.org/#/c/101124/
3. VMware dvSwitch/vSphere API support for Neutron ML2:
https://review.openstack.org/#/c/100810/

 In addition to this there is also talk about HP proposing some form
 of VMware network management.


I believe this is blueprint [1]. This was proposed a while ago, but now it
needs to go through the new BP review process.

[1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan


  Each of the above has a specific use case and will enable existing vSphere
 users to adopt and make use of Neutron.

  Items #2 and #3 offer a use case where the user is able to leverage and
 manage VMware DVS networks. This support will have the following
 limitations:

- Only VLANs are supported (there is no VXLAN support)
- No security groups
- #3 – the spec indicates that it will make use of pyvmomi (
https://github.com/vmware/pyvmomi). There are a number of disclaimers
here:
   - This is currently blocked regarding the integration into the
   requirements project (https://review.openstack.org/#/c/69964/)
   - The idea was to have oslo.vmware leverage this in the future (
   https://github.com/openstack/oslo.vmware)

 Item #1 will offer support for all of the existing Neutron APIs and their
 functionality. This solution will require an additional component called NSX
 (https://www.vmware.com/support/pubs/nsx_pubs.html).


It's great to see this breakdown, it's very useful in order to identify the
potential gaps and overlaps amongst the various efforts around ESX and
Neutron. This will also ensure a path towards a coherent code contribution.

 It would be great if we could all align our efforts and have some clear
 development items for the community. In order to do this I'd like to suggest
 that we meet to sync and discuss all efforts. Please let me know if the
 following sounds ok for an initial meeting to discuss how we can move
 forwards:
  - Tuesday 15:00 UTC
  - IRC channel #openstack-vmware


I am available to join.



  We can discuss the following:

1. Different proposals
2. Combining efforts
3. Setting a formal time for meetings and follow ups

 Looking forward to working on this stuff with the community and providing
 a gateway to using Neutron and further enabling the adoption of OpenStack.


I think code contribution is only one aspect of this story; my other
concern is that from a usability standpoint we would need to provide a
clear framework for users to understand what these solutions can do for
them and which one to choose.

Going forward I think it would be useful if we produced an overarching
blueprint that outlines all the ESX options being proposed for OpenStack
Networking (and the existing ones, like NSX - formerly known as NVP, or
nova-network), their benefits and drawbacks, their technical dependencies,
system requirements, APIs supported, etc., so that a user can make an informed
decision when looking at ESX deployments in OpenStack.



  Thanks
 Gary


Cheers,
Armando


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] DVR demo and how-to

2014-06-30 Thread Armando M.
Hi folks,

The DVR team is working really hard to complete this important task for
Juno and Neutron.

In order to help see this feature in action, a video has been made
available and link can be found in [2].

There is still some work to do, however I wanted to remind you that all of
the relevant information is available on the wiki [1, 2] and Gerrit [3].

[1] - https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
[2] - https://wiki.openstack.org/wiki/Neutron/DVR/HowTo
[3] - https://review.openstack.org/#/q/topic:bp/neutron-ovs-dvr,n,z

More to follow!

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-07-14 Thread Armando M.
Sounds good to me.


On 14 July 2014 07:13, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 I am sorry but I had to attend a meeting now. Can we please postpone this
 to tomorrow?
 Thanks
 Gary

 On 7/8/14, 11:19 AM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 
 Just an update and a progress report:
 
  1. Armando has created an umbrella BP -
  https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs+branch:master+topic:bp/esx-neutron,n,z
  
  2. Whoever is proposing the BPs - can you please fill in the table -
  https://docs.google.com/document/d/1vkfJLZjIetPmGQ6GMJydDh8SSWz60iUhuuKh[...]qoz8/edit?usp=sharing
 
 Lets meet again next week Monday at the same time and same place and plan
 
 future steps. How does that sound?
 
 Thanks
 
 Gary
 
 
 
 On 7/2/14, 2:27 PM, Gary Kotton gkot...@vmware.com wrote:
 
 
 
 Hi,
 
  Sadly last night we did not have enough people to make any
  progress.
 
  Let's try again next week Monday at 14:00 UTC. The meeting will take place
 
 on #openstack-vmware channel
 
 Alut a continua
 
 Gary
 
 
 
  On 6/30/14, 6:38 PM, Kyle Mestery mest...@noironetworks.com wrote:
  
  On Mon, Jun 30, 2014 at 10:18 AM, Armando M. arma...@gmail.com wrote:
   Hi Gary,
   Thanks for sending this out, comments inline.
  
  Indeed, thanks Gary!
  
   On 29 June 2014 00:15, Gary Kotton gkot...@vmware.com wrote:
    Hi,
    At the moment there are a number of different BPs that are proposed to
    enable different VMware network management solutions. The following
    specs are in review:
  
    VMware NSX-vSphere plugin: https://review.openstack.org/102720
    Neutron mechanism driver for VMWare vCenter DVS network creation:
    https://review.openstack.org/#/c/101124/
    VMware dvSwitch/vSphere API support for Neutron ML2:
    https://review.openstack.org/#/c/100810/
  
  I've commented in these reviews about combining efforts here, I'm glad
  you're taking the lead to make this happen Gary. This is much
  appreciated!
  
    In addition to this there is also talk about HP proposing some form of
    VMware network management.
  
   I believe this is blueprint [1]. This was proposed a while ago, but now
   it needs to go through the new BP review process.
  
   [1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan
  
    Each of the above has a specific use case and will enable existing
    vSphere users to adopt and make use of Neutron.
  
    Items #2 and #3 offer a use case where the user is able to leverage
    and manage VMware DVS networks. This support will have the following
    limitations:
  
    Only VLANs are supported (there is no VXLAN support)
    No security groups
    #3 - the spec indicates that it will make use of pyvmomi
    (https://github.com/vmware/pyvmomi). There are a number of disclaimers
    here:
  
    This is currently blocked regarding the integration into the
    requirements project (https://review.openstack.org/#/c/69964/)
    The idea was to have oslo.vmware leverage this in the future
    (https://github.com/openstack/oslo.vmware)
  
    Item #1 will offer support for all of the existing Neutron APIs and
    their functionality. This solution will require an additional
    component called NSX
    (https://www.vmware.com/support/pubs/nsx_pubs.html).
  
   It's great to see this breakdown, it's very useful in order to identify
   the potential gaps and overlaps amongst the various efforts around ESX
   and Neutron. This will also ensure a path towards a coherent code
   contribution.
  
    It would be great if we could all align our efforts and have some
    clear development items for the community. In order to do this I'd
    like to suggest that we meet to sync and discuss all efforts. Please
    let me know
Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Armando M.
I think the specs under the umbrella one can be approved/treated
individually.

The umbrella one is an informational blueprint, there is not going to be
code associated with it, however before approving it (and the individual
ones) we'd need all the parties interested in vsphere support for Neutron
to reach an agreement as to what the code will look like so that the
individual contributions being proposed are not going to clash with each
other or create needless duplication.




On 21 July 2014 06:11, Kyle Mestery mest...@mestery.com wrote:

 On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton gkot...@vmware.com wrote:
  Hi,
  I would like to propose the following for spec freeze exception:
 
  https://review.openstack.org/#/c/105369
 
  This is an umbrella spec for a number of VMware DVS support specs. Each
 has
  its own unique use case and will enable a lot of existing VMware DVS
 users
  to start to use OpenStack.
 
  For https://review.openstack.org/#/c/102720/ we have the following
 which we
  can post when the internal CI for the NSX-v is ready (we are currently
  working on this):
   - core plugin functionality
   - layer 3 support
   - security group support
 
 Do we need to approve all the under the umbrella specs as well?

  Thanks
  Gary
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Armando M.
That would be my thinking as well, but if we managed to make an impressive
progress from now until the Feature Freeze proposal deadline, I'd be
willing to reevaluate the situation.

A.


On 21 July 2014 12:13, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Jul 21, 2014 at 2:03 PM, Armando M. arma...@gmail.com wrote:
  I think the specs under the umbrella one can be approved/treated
  individually.
 
  The umbrella one is an informational blueprint, there is not going to be
  code associated with it, however before approving it (and the individual
  ones) we'd need all the parties interested in vsphere support for
 Neutron to
  reach an agreement as to what the code will look like so that the
 individual
  contributions being proposed are not going to clash with each other or
  create needless duplication.
 
 That's what I was thinking as well. So, given where we're at in Juno,
 I'm leaning towards having all of this consensus building happen now
 and we can start the Kilo cycle with these BPs in agreement from all
 contributors.

 Does that sound ok?

 Thanks,
 Kyle

 
 
 
  On 21 July 2014 06:11, Kyle Mestery mest...@mestery.com wrote:
 
  On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton gkot...@vmware.com
 wrote:
   Hi,
   I would like to propose the following for spec freeze exception:
  
   https://review.openstack.org/#/c/105369
  
   This is an umbrella spec for a number of VMware DVS support specs.
 Each
   has
   its own unique use case and will enable a lot of existing VMware DVS
   users
   to start to use OpenStack.
  
   For https://review.openstack.org/#/c/102720/ we have the following
 which
   we
   can post when the internal CI for the NSX-v is ready (we are currently
   working on this):
- core plugin functionality
- layer 3 support
- security group support
  
  Do we need to approve all the under the umbrella specs as well?
 
   Thanks
   Gary
  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-31 Thread Armando M.
It is not my intention to debate, point fingers, or find culprits; these
issues can be addressed in some other context.

I am gonna say three things:

1) If a core-reviewer puts a -2, there must be a good reason for it. If
other reviewers blindly move on as some people seem to imply here, then
those reviewers should probably not review the code at all! My policy is to
review all the code I am interested in/I can, regardless of the score. My
-1 may be someone's +1 (or vice versa), so 'trusting' someone else's vote
is the wrong way to go about this.

2) If we all feel that this feature is important (which I am not sure about,
as it was marked 'low' in Oslo; I am not sure how it was tracked in Neutron),
there is the weekly IRC Neutron meeting to raise awareness, since all cores
participate; to the best of my knowledge we never (or barely) spoke of the
rootwrap work.

3) If people do want this work in Juno (Carl being one of them), we can
figure out how to make one final push, and assess potential regression. We
'rushed' other features late in cycle in the past (like nova/neutron event
notifications) and if we keep this disabled by default in Juno, I don't
think it's really that risky. I can work with Carl to give the patches some
more love.

Armando



On 31 July 2014 15:40, Rudra Rugge ru...@contrailsystems.com wrote:

 Hi Kyle,

 I also agree with Mandeep's suggestion of putting a time frame on the
  lingering -2 if the addressed concerns have been taken care of. In my
  experience a sticky -2 also deters other reviewers from reviewing an
  updated patch.

 Either a time-frame or a possible override by PTL (move to -1) would help
 make progress on the review.

 Regards,
 Rudra


 On Thu, Jul 31, 2014 at 2:29 PM, Mandeep Dhami dh...@noironetworks.com
 wrote:

 Hi Kyle:

  As -2 is sticky, and as there exists a possibility that the original core
 might not get time to get back to re-reviewing it, do you think that there
 should be clearer guidelines on its usage (to avoid what you identified as
 dropping of the balls)?

 Salvatore had a good guidance in a related thread [0], do you agree with
 something like that?


 I try to avoid -2s as much as possible. I put a -2 only when I reckon your
 patch should never be merged because it'll make the software unstable or
 tries to solve a problem that does not exist. -2s stick across patches and
 tend to put off other reviewers.

 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041339.html


 Or do you think that 3-5 days after an update that addresses the issues
 identified in the original -2, we should automatically remove that -2? If
 this does not happen often, this process does not have to be automated,
 just an exception that the PTL can exercise to address issues where the
 original reason for -2 has been addressed and nothing new has been
 identified?



 On Thu, Jul 31, 2014 at 11:25 AM, Kyle Mestery mest...@mestery.com
 wrote:

 On Thu, Jul 31, 2014 at 7:11 AM, Yuriy Taraday yorik@gmail.com
 wrote:
  On Wed, Jul 30, 2014 at 11:52 AM, Kyle Mestery mest...@mestery.com
 wrote:
  and even less
  possibly rootwrap [3] if the security implications can be worked out.
 
  Can you please provide some input on those security implications that
 are
  not worked out yet?
  I'm really surprised to see such comments in some ML thread not
 directly
  related to the BP. Why is my spec blocked? Neither spec [1] nor code
 (which
  is available for a really long time now [2] [3]) can get enough
 reviewers'
  attention because of those groundless -2's. Should I abandon these
 change
  requests and file new ones to get some eyes on my code and proposals?
 It's
  just getting ridiculous. Let's take a look at the timeline, shall we?
 
 I share your concerns here as well, and I'm sorry you've had a bad
 experience working with the community here.

  Mar, 25 - first version of the first part of Neutron code is published
 at
  [2]
  Mar, 28 - first reviewers come and it gets -1'd by Mark because of lack
  of a BP (thankfully it wasn't a -2 yet, so reviews continued)
  Apr, 1 - Both Oslo [5] and Neutron [6] BPs are created;
  Apr, 2 - first version of the second part of Neutron code is published
 at
  [3];
  May, 16 - first version of Neutron spec is published at [1];
  May, 19 - Neutron spec gets frozen by Mark's -2 (because Oslo BP is not
  approved yet);
  May, 21 - first part of Neutron code [2] is found generally OK by
 reviewers;
  May, 21 - first version of Oslo spec is published at [4];
  May, 29 - a version of the second part of Neutron code [3] is
 published that
  later raises only minor comments by reviewers;
  Jun, 5 - both parts of Neutron code [2] [3] get frozen by -2 from Mark
  because BP isn't approved yet;
  Jun, 23 - Oslo spec [4] is mostly ironed out;
  Jul, 8 - Oslo spec [4] is merged, Neutron spec immediately gets +1 and
 +2;
  Jul, 20 - SAD kicks in, no comments from Mark or anyone on blocked
 change
  requests;
  Jul, 24 - in response to Kyle's 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread Armando M.
Hi,

When I think about Group-Based Policy I cannot help myself but think about
the degree of variety of sentiments (for lack of better words) that this
subject has raised over the past few months on the mailing list and/or
other venues.

I speak for myself when I say that when I look at the end-to-end
Group-Based Policy functionality I am not entirely sold on the following
points:

- The abstraction being proposed, its relationship with the Neutron API and
ODL;
- The way the reference implementation has been introduced into the
OpenStack world, and Neutron in particular;
- What an evolution of Group-Based Policy means going forward, if we use the
proposed approach as a foundation for a more application-friendly and
intent-driven API abstraction;
- The way we used development tools for bringing Neutron developers
(reviewers and committers), application developers, operators, and users
together around these new concepts.

Can I speak for everybody when I say that we do not have a consensus across
the board on all/some/other points being touched in this thread or other
threads? I think I can: I have witnessed that there is *NOT* such a
consensus. If I am asked where I stand, my position is that I wouldn't mind
seeing Group-Based Policy as we know it kick the tires; would I love to
see it do that in a way that's not disruptive to the Neutron project? YES,
I would.

So, where do we go from here? Do we need a consensus on such a delicate
area? I think we do.

I think Mark's intent, or anyone's who has at his/her heart the interest of
the Neutron community as a whole, is to make sure that we find a compromise
which everyone is comfortable with.

Do we vote about what we do next? Do we leave just cores to vote? I am not
sure. But one thing is certain, we cannot keep procrastinating as the Juno
window is about to expire.

I am sure that there are people itching to get their hands on Group-Based
Policy, however the vehicle whereby this gets released should be irrelevant
to them; at the same time I appreciate that some people perceive Stackforge
projects as not as established and mature as other OpenStack projects; that
said, wouldn't it be fair to say that Group-Based Policy is exactly that? If
this means that other immature abstractions would need to follow suit, I
would be all in for this more decentralized approach. Can we do that now,
or do we postpone this discussion for the Kilo Summit? I don't know.

I realize that I have asked more questions than the answers I tried to
give, but I hope we can all engage in a constructive discussion.

Cheers,
Armando

PS: Salvatore I expressly stayed away from the GBP acronym you love so
much, so please read the thread and comment on it :)

On 4 August 2014 15:54, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 +1 Hemanth.


 On Tue, Aug 5, 2014 at 12:24 AM, Hemanth Ravi hemanthrav...@gmail.com
 wrote:

 Hi,

  I believe that the API has been reviewed well both for its use cases and
  correctness. And the blueprint has been approved after sufficient exposure
 of the API in the community. The best way to enable users to adopt GBP is
 to introduce this in Juno rather than as a project in StackForge. Just as
 in other APIs any evolutionary changes can be incorporated, going forward.

 OS development processes are being followed in the implementation to make
 sure that there is no negative impact on Neutron stability with the
 inclusion of GBP.

 Thanks,
 -hemanth


 On Mon, Aug 4, 2014 at 1:27 PM, Mark McClain mmccl...@yahoo-inc.com
 wrote:

  All-

 tl;dr

 * The Group Based Policy API is the kind of experimentation we should be
 attempting.
 * Experiments should be able to fail fast.
 * The master branch does not fail fast.
 * StackForge is the proper home to conduct this experiment.


 Why this email?
 ---
 Our community has been discussing and working on Group Based Policy
 (GBP) for many months.  I think the discussion has reached a point where we
 need to openly discuss a few issues before moving forward.  I recognize
 that this discussion could create frustration for those who have invested
 significant time and energy, but the reality is we need ensure we are
 making decisions that benefit all members of our community (users,
 operators, developers and vendors).

 Experimentation
 
 I like that as a community we are exploring alternate APIs.  The process
 of exploring via real user experimentation can produce valuable results.  A
 good experiment should be designed to fail fast to enable further trials
 via rapid iteration.

 Merging large changes into the master branch is the exact opposite of
 failing fast.

 The master branch deliberately favors small iterative changes over time.
  Releasing a new version of the proposed API every six months limits our
 ability to learn and make adjustments.

 In the past, we’ve released LBaaS, FWaaS, and VPNaaS as experimental
 APIs.  The results have been very mixed as operators either shy 

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.
This thread is moving so fast I can't keep up!

The fact that troubles me is that I am unable to grasp how we move forward,
which was the point of this thread to start with. It seems we have 2
options:

- We merge GBP as is, into the Neutron tree, with some minor revision
(e.g. naming?);
- We make GBP a stackforge project, that integrates with Neutron in some
shape or form;

Another option, might be something in between, where GBP is in tree, but in
some sort of experimental staging area (even though I am not sure how well
baked this idea is).

Now, as a community we all need to make a decision; arguing about the fact
that the blueprint was approved is pointless. As a matter of fact, I think
that a blueprint should be approved if and only if the code has landed
completely, but I digress!

Let's together come up with pros and cons of each approach and come up with
an informed decision.

Just reading free form text, how are we expected to do that? At least I
can't!

My 2c.
Armando


On 6 August 2014 15:03, Aaron Rosen aaronoro...@gmail.com wrote:




 On Wed, Aug 6, 2014 at 12:46 PM, Kevin Benton blak...@gmail.com wrote:

 I believe the referential security group rules solve this problem
 (unless I'm not understanding):

  I think the disconnect is that you are comparing the way the current
  mapping driver implements things for the reference implementation with the
 existing APIs. Under this light, it's not going to look like there is a
 point to this code being in Neutron since, as you said, the abstraction
 could happen at a client. However, this changes once new mapping drivers
 can be added that implement things differently.

 Let's take the security groups example. Using the security groups API
 directly is imperative (put a firewall rule on this port that blocks this
 IP) compared to a higher level declarative abstraction (make sure these
 two endpoints cannot communicate). With the former, the ports must support
 security groups and there is nowhere except for the firewall rules on that
 port to implement it without violating the user's expectation. With the
 latter, a mapping driver could determine that communication between these
 two hosts can be prevented by using an ACL on a router or a switch, which
 doesn't violate the user's intent and buys a performance improvement and
 works with ports that don't support security groups.

 Group based policy is trying to move the requests into the declarative
 abstraction so optimizations like the one above can be made.
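A toy mapping-driver sketch of that idea. All names are hypothetical and this is not GBP code; it only illustrates how one declarative intent ("these two groups must not communicate") could be realized by different mechanisms depending on backend capabilities:

```python
def render_isolation(intent, port_supports_sg, shared_router):
    """Hypothetical mapping driver: turn the declarative intent
    'endpoints of group A must not reach group B' into backend actions.
    The user states intent only; the driver chooses the mechanism."""
    src, dst = intent["from"], intent["to"]
    if shared_router is not None:
        # Cheaper: a single ACL on the router both groups sit behind.
        return [("router_acl", shared_router, src, dst, "deny")]
    if port_supports_sg:
        # Fallback: per-port firewall rules (the imperative equivalent).
        return [("sg_rule", dst, "ingress", "deny_from", src)]
    raise NotImplementedError("no mechanism can realize this intent")
```

The point of the sketch is that the same intent yields different backend actions, which an imperative "put this firewall rule on this port" request would not allow without violating the user's expectation.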


 Hi Kevin,

 Interesting points. Though, let me ask this. Why do we need to move to a
 declarative API abstraction in neutron in order to perform this
 optimization on the backend? For example, in the current neutron model, say
 we want to create a port with a security group attached to it called web
 that allows TCP:80 in and members who are in a security group called
 database. From this mapping I fail to see how it's really any different
 from the declarative model? The ports in neutron are logical abstractions
 and the backend system could be implemented in order to determine that the
 communication between these two hosts could be prevented by using an ACL on
 a router or switch as well.

 Best,

 Aaron



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.
On 6 August 2014 15:47, Kevin Benton blak...@gmail.com wrote:

 I think we should merge it and just prefix the API for now with
 '/your_application_will_break_after_juno_if_you_use_this/'


And on what pros and cons exactly do you make your call, if I may ask?

Let me start:

Option 1:
  - pros
- immediate delivery vehicle for consumption by operators
  - cons
    - code is a burden from a number of standpoints (review, test, etc.)

Option 2:
  - pros
- enable a small set of Illuminati to iterate faster
  - cons
- integration burden with other OpenStack projects (keystone, nova,
neutron, etc)

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.

 This is probably not intentional on your part, but your choice of words
 makes it seem that you are deriding the efforts of the team behind this
 effort. While I may disagree technically here and there with their current
 design, it seems to me that the effort in question is rather broad-based in
 terms of support (from multiple different organizations), and that the team
 has put a non-trivial effort into making the work public. I don't think we
 can characterize the team either as a secret group or a small set.


You misread me completely, please refrain from making these comments: I
deride no-one.

I chose the word in reference to the Enlightenment movement, with emphasis
on breaking with the traditional way of thinking (declarative vs imperative),
and I thought the analogy would stick, but apparently not.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.
On 6 August 2014 17:34, Prasad Vellanki prasad.vella...@oneconvergence.com
wrote:

 It seems like Option 1 would be preferable. Users can use it right away.


People choosing Option 1 may think that the shortest route is the best;
that said, the drawback I identified is not to be dismissed either (and I am
sure there are many more pros/cons): an immature product is of use to
no-one, and we still have the nova parity issue that haunts us.

I think this could be another reason why people associated GBP and
nova-network parity in this thread: the fact that new abstractions are
introduced without solidifying the foundations of the project is a risk to
GBP as well as Neutron itself.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-07 Thread Armando M.
Hi Salvatore,

I did notice the issue and I flagged this bug report:

https://bugs.launchpad.net/nova/+bug/1352141

I'll follow up.

Cheers,
Armando


On 7 August 2014 01:34, Salvatore Orlando sorla...@nicira.com wrote:

 I had to put the patch back on WIP because yesterday a bug causing a 100%
 failure rate slipped in.
 It should be an easy fix, and I'm already working on it.
 Situations like this, exemplified by [1] are a bit frustrating for all the
 people working on improving neutron quality.
 Now, if you allow me a little rant: as Neutron is receiving a lot of
 attention for all the ongoing discussion regarding this group policy stuff,
 would it be possible for us to receive a bit of attention to ensure both
 the full job and the grenade one are switched to voting before the juno-3
 review crunch?

 We've already had the attention of the QA team, it would probably good if
 we could get the attention of the infra core team to ensure:
 1) the jobs are also deemed by them stable enough to be switched to voting
 2) the relevant patches for openstack-infra/config are reviewed

 Regards,
 Salvatore

 [1]
 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


 On 23 July 2014 14:59, Matthew Treinish mtrein...@kortar.org wrote:

 On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
  Here I am again bothering you with the state of the full job for
 Neutron.
 
  The patch for fixing an issue in nova's server external events extension
  merged yesterday [1]
 We do not yet have enough data points to make a reliable assessment, but
 out of 37 runs since the patch merged, we had only 5 failures, which puts
  the failure rate at about 13%
 
  This is ugly compared with the current failure rate of the smoketest
 (3%).
  However, I think it is good enough to start making the full job voting
 at
  least for neutron patches.
  Once we'll be able to bring down failure rate to anything around 5%, we
 can
  then enable the job everywhere.
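The quoted failure rate is simply the failure count over the run count; a short snippet makes the arithmetic explicit (numbers taken from the message above):

```python
# 5 failures out of 37 runs of the full job since the fix merged,
# compared against the ~3% baseline of the smoketest.
failures, runs = 5, 37
rate = failures / runs
print(f"full job failure rate: {rate:.1%}")  # about 13%
```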

 I think that sounds like a good plan. I'm also curious how the failure
 rates
 compare to the other non-neutron jobs, that might be a useful comparison
 too
 for deciding when to flip the switch everywhere.

 
  As much as I hate asymmetric gating, I think this is a good compromise
 for
  avoiding developers working on other projects are badly affected by the
  higher failure rate in the neutron full job.

 So we discussed this during the project meeting a couple of weeks ago [3]
 and
 there was a general agreement that doing it asymmetrically at first would
 be
 better. Everyone should be wary of the potential harms with doing it
 asymmetrically and I think priority will be given to fixing issues that
 block
 the neutron gate should they arise.

  I will therefore resume work on [2] and remove the WIP status as soon
 as I
  can confirm a failure rate below 15% with more data points.
 

 Thanks for keeping on top of this Salvatore. It'll be good to finally be
 at
 least partially gating with a parallel job.

 -Matt Treinish

 
  [1] https://review.openstack.org/#/c/103865/
  [2] https://review.openstack.org/#/c/88289/
 [3]
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28

 
 
  On 10 July 2014 11:49, Salvatore Orlando sorla...@nicira.com wrote:
 
  
  
  
   On 10 July 2014 11:27, Ihar Hrachyshka ihrac...@redhat.com wrote:
  
   -BEGIN PGP SIGNED MESSAGE-
   Hash: SHA512
  
   On 10/07/14 11:07, Salvatore Orlando wrote:
The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
it seems there has been an improvement on the failure rate, which
seems to have dropped to 25% from over 40%. Still, since the patch
merged there have been 11 failures already in the full job out of
42 jobs executed in total. Of these 11 failures:
- 3 were due to problems in the patches being tested
- 1 had the same root cause as bug 1329564. Indeed the related job
  started before the patch merged but finished after, so this
  failure doesn't count.
- 1 was for an issue introduced about a week ago which is actually
  causing a lot of failures in the full job [3]. The fix should be
  easy; however, given the nature of the test, we might even skip
  it while it's fixed.
- 3 were for bug 1333654 [4]; for this bug discussion is going on
  on gerrit regarding the most suitable approach.
- 3 were for lock wait timeout errors. Several people in the
  community are already working on them. I hope this will raise
  the profile of this issue (maybe some might think it's just a
  corner case as it rarely causes failures

Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread Armando M.
If the consensus is to unify all the config options into a single
configuration file, I'd suggest following what the Nova folks did with
[1], which I think is what Salvatore was also hinted. This will also
help mitigate needless source code conflicts that would inevitably
arise when merging competing changes to the same file.

I personally do not like having a single file with gazillion options
(the same way I hate source files with gazillion LOC's but I digress
;), but I don't like a proliferation of config files either. So I
think what Mark suggested below makes sense.

Cheers,
Armando

[1] - 
https://github.com/openstack/nova/blob/master/etc/nova/README-nova.conf.txt

On 2 May 2014 07:09, Mark McClain mmccl...@yahoo-inc.com wrote:

 On May 2, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:

 Some non insignificant number of devstack changes related to neutron
 seem to be neutron plugins having to do all kinds of manipulation of
 extra config files. The grenade upgrade issue in neutron was because of
 some placement change on config files. Neutron seems to have *a ton* of
 config files and is extremely sensitive to their locations/naming, which
 also seems like it ends up in flux.

 We have grown in the number of configuration files and I do think some of the 
 design decisions made several years ago should probably be revisited.  One of 
 the drivers of multiple configuration files is the way that Neutron is 
 currently packaged [1][2].  We’re packaged significantly different than the 
 other projects so the thinking in the early years was that each 
 plugin/service since it was packaged separately needed its own config file.  
 This causes problems because often it involves changing the init script 
 invocation if the plugin is changed vs only changing the contents of the init 
 script.  I’d like to see Neutron changed to be a single package similar to 
 the way Cinder is packaged with the default config being ML2.


 Is there an overview somewhere to explain this design point?

 Sadly no.  It’s a historical convention that needs to be reconsidered.


 All the other services have a single config config file designation on
 startup, but neutron services seem to need a bunch of config files
 correct on the cli to function (see this process list from recent
 grenade run - http://paste.openstack.org/show/78430/ note you will have
 to horiz scroll for some of the neutron services).

 Mostly it would be good to understand this design point, and if it could
 be evolved back to the OpenStack norm of a single config file for the
 services.


 +1 to evolving into a more limited set of files.  The trick is how we 
 consolidate the agent, server, plugin and/or driver options or maybe we don’t 
 consolidate and use config-dir more.  In some cases, the files share a set of 
 common options and in other cases there are divergent options [3][4].   
 Outside of testing the agents are not installed on the same system as the 
 server, so we need to ensure that the agent configuration files should stand 
 alone.

 To throw something out, what if moved to using config-dir for optional 
 configs since it would still support plugin scoped configuration files.

 Neutron Servers/Network Nodes
 /etc/neutron.d
 neutron.conf  (Common Options)
 server.d (all plugin/service config files )
 service.d (all service config files)


 Hypervisor Agents
 /etc/neutron
 neutron.conf
 agent.d (Individual agent config files)


 The invocations would then be static:

 neutron-server --config-file /etc/neutron/neutron.conf --config-dir
 /etc/neutron/server.d

 Service Agents:
 neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-dir
 /etc/neutron/service.d

 Hypervisors (assuming the consolidates L2 is finished this cycle):
 neutron-l2-agent --config-file /etc/neutron/neutron.conf --config-dir
 /etc/neutron/agent.d
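The invocations above rely on the --config-file/--config-dir layering that Neutron gets from oslo.config. The sketch below uses only the stdlib configparser (not oslo.config, and with made-up option values) to illustrate the precedence semantics being proposed: a common base file is read first, then each file in the config-dir, with later reads overriding earlier values.

```python
import configparser

def load_config(base_text, extra_texts):
    """base_text ~ neutron.conf; extra_texts ~ files from a config-dir,
    read in order, each able to add sections or override earlier keys."""
    cfg = configparser.ConfigParser()
    cfg.read_string(base_text)
    for text in extra_texts:
        cfg.read_string(text)  # later reads win on conflicting keys
    return cfg

base = "[DEFAULT]\ncore_plugin = ml2\nverbose = False\n"
plugin = "[DEFAULT]\nverbose = True\n[ml2]\ntype_drivers = vlan\n"
cfg = load_config(base, [plugin])
print(cfg["DEFAULT"]["core_plugin"])  # ml2 (from the base file)
print(cfg["DEFAULT"]["verbose"])      # True (overridden by the plugin file)
```

With this semantics, changing the plugin only means changing which files sit in the directory, not the init script's command line.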

 Thoughts?

 mark

 [1] http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/
 [2] 
 http://packages.ubuntu.com/search?keywords=neutron&searchon=names&suite=trusty&section=all
 [3] 
 https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/nuage/nuage_plugin.ini#n2
 [4]https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/bigswitch/restproxy.ini#n3
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposed changes to core team

2014-05-21 Thread Armando M.
+1 from me too: Carl's contributions, code and reviews, have helped raise
the quality of this project.

Cheers,
Armando

On 21 May 2014 15:05, Maru Newby ma...@redhat.com wrote:

 On May 21, 2014, at 1:59 PM, Kyle Mestery mest...@noironetworks.com wrote:

 Neutron cores, please vote +1/-1 for the proposed addition of Carl
 Baldwin to Neutron core.

 +1 from me

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Armando M.
I would second Maru's concerns, and I would also like to add the following:

We need to acknowledge the fact that there are certain architectural
aspects of Neutron as a project that need to be addressed; at the
summit we talked about the core refactoring, a task oriented API, etc.
To me these items have been neglected far too much over the past and
would need a higher priority and a lot more attention during the Juno
cycle. Being stretched as we are I wonder if dev/review cycles
wouldn't be better spent devoting more time to these efforts rather
than GP.

That said, I appreciate that GP is important and needs to move
forward, but at the same time I am thinking that there must be a
better way for addressing it and yet relieve some of the pressure that
GP complexity imposes on the Neutron team. One aspect that was discussed
at the summit was that the type of approach shown in [2] and [3]
below was chosen because of a lack of proper integration hooks... so I
am advocating: let's talk about those first before ruling them out in
favor of a monolithic approach that seems to violate some engineering
principles, like modularity and loose decoupling of system components.

I think we didn't have enough time during the summit to iron out some
of the concerns voiced here, and it seems like the IRC meeting for
Group Policy would not be the right venue to try and establish a
common ground among the people driving this effort and the rest of the
core team.

Shall we try and have an ad-hoc meeting and an ad-hoc agenda to find a
consensus?

Many thanks,
Armando

On 22 May 2014 11:38, Maru Newby ma...@redhat.com wrote:

 On May 22, 2014, at 11:03 AM, Maru Newby ma...@redhat.com wrote:

 At the summit session last week for group-based policy, there were many 
 concerns voiced about the approach being undertaken.  I think those concerns 
 deserve a wider audience, and I'm going to highlight some of them here.

 The primary concern seemed to be related to the complexity of the approach 
 implemented for the POC.  A number of session participants voiced concern 
 that the simpler approach documented in the original proposal [1] (described 
 in the section titled 'Policies applied between groups') had not been 
 implemented in addition to or instead of what appeared in the POC (described 
 in the section titled 'Policies applied as a group API').  The simpler 
 approach was considered by those participants as having the advantage of 
 clarity and immediate usefulness, whereas the complex approach was deemed 
 hard to understand and without immediate utility.

 A secondary but no less important concern is related to the impact on 
 Neutron of the approach implemented in the POC.  The POC was developed 
 monolithically, without oversight through gerrit, and the resulting patches 
 were excessive in size (~4700 [2] and ~1500 [3] lines).  Such large patches 
 are effectively impossible to review.  Even broken down into reviewable 
 chunks, though, it does not seem realistic to target juno-1 for merging this 
 kind of complexity.  The impact on stability could be considerable, and it 
 is questionable whether the necessary review effort should be devoted to 
 fast-tracking group-based policy at all, let alone an approach that is 
 considered by many to be unnecessarily complicated.

 The blueprint for group policy [4] is currently listed as a 'High' priority. 
  With the above concerns in mind, does it make sense to continue 
 prioritizing an effort that at present would seem to require considerably 
 more resources than the benefit it appears to promise?


 Maru

 1: https://etherpad.openstack.org/p/group-based-policy

 Apologies, this link is to the summit session etherpad.  The link to the 
 original proposal is:

 https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

 2: https://review.openstack.org/93853
 3: https://review.openstack.org/93935
 4: 
 https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Armando M.
 up
 to writing the code, those comments are not going to help with solving the
 original problem. And this _is_ open-source. If you disagree, please write
 code and the community can decide for itself as to what model is actually
 simple to use for them. Curtailing efforts from other developers just
 because their engineering trade-offs are different from what you believe
 your use-case needs is not why we like open source. We enjoy the mode where
 different developers try different things, we experiment, and the software
 evolves to what the user demands. Or maybe, multiple models live in harmony.
 Let the users decide that.

 3. Re: Could dev/review cycles be better spent on refactoring
 I think that most people agree that policy control is an important feature
 that fundamentally improves neutron (by solving the automation and scale
 issues). In a large project, multiple sub-projects can, and for a healthy
 project should, work in parallel. I understand that the neutron core team is
 stretched. But we still need to be able to balance the needs of today
 (paying off the technical debt/existing-issues by doing refactoring) with
 needs of tomorrow (new features like GP and LBaaS). GP effort was started in
 Havana, and now we are trying to get this in Juno. I think that is
 reasonable and a long enough cycle for a high priority project to be able
 to get some core attention. Again I refer to LBaaS experience, as they
 struggled with very similar issues.

 4. Re: If refactored neutron was available, would a simpler option become
 more viable
 We would love to be able to answer that question. We have been trying to
 understand the refactoring work to understand this (see another ML thread)
 and we are open to understanding your position on that. We will call the
 ad-hoc meeting that you suggested and we would like to understand the
 refactoring work that might be reused for simpler policy implementation. At
 the same time, we would like to build on what is available today, and when
 the required refactored neutron becomes available (say Juno or K-release),
 we are more than happy to adapt to it at that time. Serializing all
 development around an effort that is still in inception phase is not a good
 solution. We are looking forward to participating in the core refactoring
 work, and based on the final spec we come up with, we would love to be
 able to eventually make the policy implementation simpler.

 Regards,
 Mandeep




 On Thu, May 22, 2014 at 11:44 AM, Armando M. arma...@gmail.com wrote:

 I would second Maru's concerns, and I would also like to add the
 following:

 We need to acknowledge the fact that there are certain architectural
 aspects of Neutron as a project that need to be addressed; at the
 summit we talked about the core refactoring, a task oriented API, etc.
 To me these items have been neglected far too much over the past and
 would need a higher priority and a lot more attention during the Juno
 cycle. Being stretched as we are I wonder if dev/review cycles
 wouldn't be better spent devoting more time to these efforts rather
 than GP.

 That said, I appreciate that GP is important and needs to move
 forward, but at the same time I am thinking that there must be a
 better way for addressing it and yet relieve some of the pressure that
 GP complexity imposes on the Neutron team. One aspect that was discussed
 at the summit was that the type of approach shown in [2] and [3]
 below was chosen because of a lack of proper integration hooks... so I
 am advocating: let's talk about those first before ruling them out in
 favor of a monolithic approach that seems to violate some engineering
 principles, like modularity and loose decoupling of system components.

 I think we didn't have enough time during the summit to iron out some
 of the concerns voiced here, and it seems like the IRC meeting for
 Group Policy would not be the right venue to try and establish a
 common ground among the people driving this effort and the rest of the
 core team.

 Shall we try and have an ad-hoc meeting and an ad-hoc agenda to find a
 consensus?

 Many thanks,
 Armando

 On 22 May 2014 11:38, Maru Newby ma...@redhat.com wrote:
 
  On May 22, 2014, at 11:03 AM, Maru Newby ma...@redhat.com wrote:
 
  At the summit session last week for group-based policy, there were many
  concerns voiced about the approach being undertaken.  I think those 
  concerns
  deserve a wider audience, and I'm going to highlight some of them here.
 
  The primary concern seemed to be related to the complexity of the
  approach implemented for the POC.  A number of session participants voiced
  concern that the simpler approach documented in the original proposal [1]
  (described in the section titled 'Policies applied between groups') had 
  not
  been implemented in addition to or instead of what appeared in the POC
  (described in the section titled 'Policies applied as a group API').  The
  simpler approach was considered by those

Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-23 Thread Armando M.
On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:

 On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

 Hi Armando:

 Those are good points. I will let Bob Kukura chime in on the specifics of
 how we intend to do that integration. But if what you see in the
 prototype/PoC was our final design for integration with Neutron core, I
 would be worried about that too. That specific part of the code
 (events/notifications for DHCP) was done in that way just for the prototype
 - to allow us to experiment with the part that was new and needed
 experimentation, the APIs and the model.

 That is the exact reason that we did not initially check the code to gerrit
 - so that we do not confuse the review process with the prototype process.
 But we were requested by other cores to check in even the prototype code as
 WIP patches to allow for review of the API parts. That can unfortunately
 create this very misunderstanding. For the review, I would recommend not the
 WIP patches, as they contain the prototype parts as well, but just the final
 patches that are not marked WIP. If you see such issues in that part of the
 code, please DO raise them, as that would be code that we intend to upstream.

 I believe Bob did discuss the specifics of this integration issue with you
 at the summit, but like I said it is best if he represents that side
 himself.

 Armando and Mandeep,

 Right, we do need a workable solution for the GBP driver to invoke neutron
 API operations, and this came up at the summit.

 We started out in the PoC directly calling the plugin, as is currently done
 when creating ports for agents. But this is not sufficient because the DHCP
 notifications, and I think the nova notifications, are needed for VM ports.
 We also really should be generating the other notifications, enforcing
 quotas, etc. for the neutron resources.

I am at a loss here: if you say that you couldn't fit at the plugin
level, that is because it is the wrong level!! Sitting above it and
redoing all the glue code around it to add DHCP notifications etc
continues the bad practice within the Neutron codebase where there is
not a good separation of concerns: for instance everything is cobbled
together like the DB and plugin logic. I appreciate that some design
decisions have been made in the past, but there's no good reason for a
nice new feature like GP to continue this bad practice; this is why I
feel strongly about the current approach being taken.


 We could just use python-neutronclient, but I think we'd prefer to avoid the
 overhead. The neutron project already depends on python-neutronclient for
 some tests, the debug facility, and the metaplugin, so in retrospect, we
 could have easily used it in the PoC.

I am not sure I understand what overhead you mean here. Could you
clarify? Actually, looking at the code, I see a mind-boggling set of
interactions going back and forth between the GP plugin, the policy
driver manager, the mapping driver and the core plugin: they are all
entangled together. For instance, when creating an endpoint, the GP
plugin ends up calling the mapping driver, which in turn ends up calling
the GP plugin itself! If this is not overhead, I don't know what is!
The way the code has been structured makes it very difficult to read,
let alone maintain and extend with other policy mappers. The ML2-like
nature of the approach taken might work well in the context of core
plugin, mechanisms drivers etc, but I would argue that it poorly
applies to the context of GP.


 With the existing REST code, if we could find the
 neutron.api.v2.base.Controller class instance for each resource, we could
 simply call create(), update(), delete(), and show() on these. I didn't see
 an easy way to find these Controller instances, so I threw together some
 code similar to these Controller methods for the PoC. It probably wouldn't
 take too much work to have neutron.manager.NeutronManager provide access to
 the Controller classes if we want to go this route.

 The core refactoring effort may eventually provide a nice solution, but we
 can't wait for this. It seems we'll need to either use python-neutronclient
 or get access to the Controller classes in the meantime.

 Any thoughts on these? Any other ideas?

I am still not sure why do you even need to go all the way down to the
Controller class. After all, it's almost as if GP could be a service in
its own right that makes use of Neutron to map the application-centric
abstractions on top of the networking constructs; this can happen via
the REST interface. I don't think there is a dependency on the core
refactoring here: the two can progress separately, so long as we break
the tie, from an implementation perspective, that GP and core plugins
need to live in the same address space. Am I missing something?
Because I still cannot justify why things have been coded the way they
have.
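The decoupling being argued for here can be pictured with a minimal sketch (hypothetical classes, not actual GBP code): if the mapping driver is written against a narrow Neutron API interface, whether that interface is backed by in-process plugin calls or by REST via python-neutronclient becomes a deployment choice rather than an architectural constraint.

```python
import abc

class NeutronAPI(abc.ABC):
    """The narrow contract the mapping driver depends on."""
    @abc.abstractmethod
    def create_network(self, body): ...

class InProcessAPI(NeutronAPI):
    """Would wrap direct plugin calls (plus notifications, quotas, ...)."""
    def create_network(self, body):
        return {"id": "net-1", **body["network"]}  # stubbed for the sketch

class RestAPI(NeutronAPI):
    """Would wrap python-neutronclient; same contract, separate process."""
    def create_network(self, body):
        return {"id": "net-1", **body["network"]}  # stubbed for the sketch

def map_endpoint_group(api: NeutronAPI, name):
    # The driver's mapping logic is identical regardless of the backend.
    return api.create_network({"network": {"name": name}})

print(map_endpoint_group(InProcessAPI(), "epg-web")["name"])  # epg-web
```

With this shape, GP could even run as a separate service, as suggested above, without touching the driver logic.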

Thanks,
Armando


 Thanks,

 -Bob


 Regards,
 Mandeep




 ___
 

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-24 Thread Armando M.
On 24 May 2014 05:20, Robert Kukura kuk...@noironetworks.com wrote:

 On 5/23/14, 10:54 PM, Armando M. wrote:

 On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:

 On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

 Hi Armando:

 Those are good points. I will let Bob Kukura chime in on the specifics of
 how we intend to do that integration. But if what you see in the
 prototype/PoC was our final design for integration with Neutron core, I
 would be worried about that too. That specific part of the code
 (events/notifications for DHCP) was done in that way just for the
 prototype
 - to allow us to experiment with the part that was new and needed
 experimentation, the APIs and the model.

 That is the exact reason that we did not initially check the code to
 gerrit
 - so that we do not confuse the review process with the prototype
 process.
 But we were requested by other cores to check in even the prototype code
 as
 WIP patches to allow for review of the API parts. That can unfortunately
 create this very misunderstanding. For the review, I would recommend not
 the
 WIP patches, as they contain the prototype parts as well, but just the
 final
  patches that are not marked WIP. If you see such issues in that part of the
  code, please DO raise them, as that would be code that we intend to
  upstream.

 I believe Bob did discuss the specifics of this integration issue with
 you
 at the summit, but like I said it is best if he represents that side
 himself.

 Armando and Mandeep,

 Right, we do need a workable solution for the GBP driver to invoke
 neutron
 API operations, and this came up at the summit.

 We started out in the PoC directly calling the plugin, as is currently
 done
 when creating ports for agents. But this is not sufficient because the
 DHCP
 notifications, and I think the nova notifications, are needed for VM
 ports.
 We also really should be generating the other notifications, enforcing
 quotas, etc. for the neutron resources.

 I am at a loss here: if you say that you couldn't fit at the plugin
 level, that is because it is the wrong level!! Sitting above it and
 redoing all the glue code around it to add DHCP notifications etc
 continues the bad practice within the Neutron codebase where there is
 not a good separation of concerns: for instance everything is cobbled
 together like the DB and plugin logic. I appreciate that some design
 decisions have been made in the past, but there's no good reason for a
 nice new feature like GP to continue this bad practice; this is why I
 feel strongly about the current approach being taken.

 Armando, I am agreeing with you! The code you saw was a proof-of-concept
 implementation intended as a learning exercise, not something intended to be
 merged as-is to the neutron code base. The approach for invoking resources
 from the driver(s) will be revisited before the driver code is submitted for
 review.


 We could just use python-neutronclient, but I think we'd prefer to avoid
 the
 overhead. The neutron project already depends on python-neutronclient for
 some tests, the debug facility, and the metaplugin, so in retrospect, we
 could have easily used it in the PoC.

 I am not sure I understand what overhead you mean here. Could you
 clarify? Actually looking at the code, I see a mind-boggling set of
 interactions going back and forth between the GP plugin, the policy
 driver manager, the mapping driver and the core plugin: they are all
 entangled together. For instance, when creating an endpoint the GP
 plugin ends up calling the mapping driver that in turns ends up calls
 the GP plugin itself! If this is not overhead I don't know what is!
 The way the code has been structured makes it very difficult to read,
 let alone maintain and extend with other policy mappers. The ML2-like
 nature of the approach taken might work well in the context of core
 plugin, mechanisms drivers etc, but I would argue that it poorly
 applies to the context of GP.

 The overhead of using python-neutronclient is that unnecessary
 serialization/deserialization are performed as well as socket communication
 through the kernel. This is all required between processes, but not within a
 single process. A well-defined and efficient mechanism to invoke resource
 APIs within the process, with the same semantics as incoming REST calls,
 seems like a generally useful addition to neutron. I'm hopeful the core
 refactoring effort will provide this (and am willing to help make sure it
 does), but we need something we can use until that is available.


I appreciate that there is a cost involved in relying on distributed
communication, but this must be negligible considering what needs to
happen end-to-end. If the overhead being referred here is the price to
pay for having a more dependable system (e.g. because things can be
scaled out and/or made reliable independently), then I think this is a
price worth paying.
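As an aside, the serialization cost being debated here is easy to quantify in isolation. The following is a toy measurement, purely illustrative and not neutron code; the payload shape is made up:

```python
import json
import timeit

# A payload roughly shaped like a neutron port resource (illustrative only).
payload = {"port": {"id": "p1", "network_id": "n1",
                    "fixed_ips": [{"subnet_id": "s1",
                                   "ip_address": "10.0.0.3"}]}}

def via_rest_style(p):
    # REST over HTTP forces a serialize/deserialize round trip
    # (the socket cost is not even included here).
    return json.loads(json.dumps(p))

def via_direct_call(p):
    # An in-process call with REST semantics could hand over the dict as-is.
    return p

assert via_rest_style(payload) == payload
rest = timeit.timeit(lambda: via_rest_style(payload), number=10000)
direct = timeit.timeit(lambda: via_direct_call(payload), number=10000)
print("serialization overhead factor: %.1fx" % (rest / direct))
```

Even when the REST-style path is many times slower per call, whether that matters end-to-end is exactly the point under discussion.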

I do hope that the core refactoring is not aiming at what you're
suggesting

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-26 Thread Armando M.
On May 26, 2014 4:27 PM, Mohammad Banikazemi m...@us.ibm.com wrote:

 Armando,

 I think there are a couple of things that are being mixed up here, at
least as I see this conversation :). The mapping driver is simply one way
of implementing GP. Ideally I would say, you do not need to implement the
GP in terms of other Neutron abstractions even though you may choose to do
so. A network controller could realize the connectivities and policies
defined by GP independent of say networks, and subnets. If we agree on this
point, then how we organize the code will be different than the case where
GP is always defined as something on top of current neutron API. In other
words, we shouldn't organize the overall code for GP based solely on the
use of the mapping driver.

The mapping driver is embedded in the policy framework that Bob had
initially proposed. If I understood what you're suggesting correctly, it
makes very little sense to diverge or come up with a different framework
alongside the legacy driver later on, otherwise we may end up in the same
state of the core plugins': monolithic vs ml2-based. Could you clarify?

 In the mapping driver (aka the legacy driver) for the PoC, GP is
implemented in terms of other Neutron abstractions. I agree that using
python-neutronclient for the PoC would be fine and as Bob has mentioned it
would have been probably the best/easiest way of having the PoC implemented
in the first place. The calls to python-neutronclient in my understanding
could be eventually easily replaced with direct calls after refactoring
which lead me to ask a question concerning the following part of the
conversation (being copied here again):

Not sure why we keep bringing this refactoring up: my point is that if GP
were to be implemented the way I'm suggesting the refactoring would have no
impact on GP...even if it did, replacing remote with direct calls should be
avoided IMO.



 [Bob:]

   The overhead of using python-neutronclient is that unnecessary
   serialization/deserialization are performed as well as socket
communication
   through the kernel. This is all required between processes, but not
within a
   single process. A well-defined and efficient mechanism to invoke
resource
   APIs within the process, with the same semantics as incoming REST
calls,
   seems like a generally useful addition to neutron. I'm hopeful the
core
   refactoring effort will provide this (and am willing to help make
sure it
   does), but we need something we can use until that is available.
  

 [Armando:]

  I appreciate that there is a cost involved in relying on distributed
  communication, but this must be negligible considering what needs to
  happen end-to-end. If the overhead being referred here is the price to
  pay for having a more dependable system (e.g. because things can be
  scaled out and/or made reliable independently), then I think this is a
  price worth paying.
 
  I do hope that the core refactoring is not aiming at what you're
  suggesting, as it sounds in exact opposition to some of the OpenStack
  design principles.


 From the summit sessions (in particular the session by Mark on
refactoring the core), I too was under the impression that there will be a
way of invoking Neutron API within the plugin with the same semantics as
through the REST API. Is this a misunderstanding?

That was not my understanding, but I'll let Mark chime in on this.

Many thanks
Armando

 Best,

 Mohammad







 Armando M. arma...@gmail.com wrote on 05/24/2014 01:36:35 PM:

  From: Armando M. arma...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/24/2014 01:38 PM
  Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
driver

 
  On 24 May 2014 05:20, Robert Kukura kuk...@noironetworks.com wrote:
  
   On 5/23/14, 10:54 PM, Armando M. wrote:
  
   On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:
  
   On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
  
   Hi Armando:
  
   Those are good points. I will let Bob Kukura chime in on the
specifics of
   how we intend to do that integration. But if what you see in the
   prototype/PoC was our final design for integration with Neutron
core, I
   would be worried about that too. That specific part of the code
   (events/notifications for DHCP) was done in that way just for the
   prototype
   - to allow us to experiment with the part that was new and needed
   experimentation, the APIs and the model.
  
   That is the exact reason that we did not initially check the code to
   gerrit
   - so that we do not confuse the review process with the prototype
   process.
   But we were requested by other cores to check in even the prototype
code
   as
   WIP patches to allow for review of the API parts. That can
unfortunately
   create this very misunderstanding. For the review, I would
recommend not
   the
   WIP patches, as they contain the prototype parts as well, but just

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-27 Thread Armando M.
Hi Mohammad,

Thanks, I understand now. I appreciate that the mapping driver is one way
of doing things and that the design has been familiarized for a while. I
wish I could follow infinite channels but unfortunately the openstack
information overload is astounding and sometimes I fail :) Gerrit is the
channel I strive to follow and this is when I saw the code for the first
time, hence my feedback.

It's worth noting that the PoC design document is (as it should be) very
high level and most of my feedback applies to the implementation decisions
being made. That said, I still have doubts that an ML2 like approach is
really necessary for GP and I welcome inputs to help me change my mind :)

Thanks
Armando
On May 27, 2014 5:04 PM, Mohammad Banikazemi m...@us.ibm.com wrote:

 Thanks for the continued interest in discussing Group Policy (GP). I
 believe these discussions with the larger Neutron community can benefit the
 GP work.

 GP like any other Neutron extension can have different implementations.
 Our idea has been to have the GP code organized similar to how ML2 and
 mechanism drivers are organized, with the possibility of having different
 drivers for realizing the GP API. One such driver (analogous to an ML2
 mechanism driver I would say) is the mapping driver that was implemented
 for the PoC. I certainly do not see it as the only implementation. The
 mapping driver is just the driver we used for our PoC implementation in
 order to gain experience in developing such a driver. Hope this clarifies
 things a bit.

 Please note that for better or worse we have produced several documents
 during the previous cycle. We have tried to collect them on the GP wiki
 page [1]. The latest design document [2] should give a broad view of the GP
 extension and the model being proposed. The PoC document [3] may clarify
 our PoC plans and where the mapping driver stands wrt other pieces of the
 work.  (Please note some parts of the plan as described in the PoC document
 were not implemented.)

 Hope my explanation and these documents (and other documents available on
 the GP wiki) are helpful.

 Best,

 Mohammad

 [1] https://wiki.openstack.org/wiki/Neutron/GroupPolicy   - GP wiki
 page
 [2]
 https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/
- GP design document
 [3]
 https://docs.google.com/document/d/14UyvBkptmrxB9FsWEP8PEGv9kLqTQbsmlRxnqeF9Be8/
- GP PoC document



 From: Armando M. arma...@gmail.com
 To: OpenStack Development Mailing List, (not for usage questions) 
 openstack-dev@lists.openstack.org,
 Date: 05/26/2014 09:46 PM
 Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
 driver
 --




 On May 26, 2014 4:27 PM, Mohammad Banikazemi m...@us.ibm.com wrote:
 
  Armando,
 
  I think there are a couple of things that are being mixed up here, at
 least as I see this conversation :). The mapping driver is simply one way
 of implementing GP. Ideally I would say, you do not need to implement the
 GP in terms of other Neutron abstractions even though you may choose to do
 so. A network controller could realize the connectivities and policies
 defined by GP independent of say networks, and subnets. If we agree on this
 point, then how we organize the code will be different than the case where
 GP is always defined as something on top of current neutron API. In other
 words, we shouldn't organize the overall code for GP based solely on the
 use of the mapping driver.

 The mapping driver is embedded in the policy framework that Bob had
 initially proposed. If I understood what you're suggesting correctly, it
 makes very little sense to diverge or come up with a different framework
 alongside the legacy driver later on, otherwise we may end up in the same
 state of the core plugins': monolithic vs ml2-based. Could you clarify?
 
  In the mapping driver (aka the legacy driver) for the PoC, GP is
 implemented in terms of other Neutron abstractions. I agree that using
 python-neutronclient for the PoC would be fine and as Bob has mentioned it
 would have been probably the best/easiest way of having the PoC implemented
 in the first place. The calls to python-neutronclient in my understanding
 could eventually be easily replaced with direct calls after refactoring,
 which leads me to ask a question concerning the following part of the
 conversation (being copied here again):

 Not sure why we keep bringing this refactoring up: my point is that if GP
 were to be implemented the way I'm suggesting the refactoring would have no
 impact on GP...even if it did, replacing remote with direct calls should be
 avoided IMO.

 
 
  [Bob:]
 
The overhead of using python

Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-28 Thread Armando M.
Hi Keshava,

To the best of my knowledge Nova does not have an explicit way to determine
VM placements based on network attributes. That said, it does have a
general mechanism called host-aggregates [1] that can be leveraged to
address what you are looking for. How certain hosts are grouped together to
match certain network affinity rules is in the hands of the cloud operator
and I believe this requires quite a bit of out-of-band management.

I recall at one point that there was an effort going on to improve the
usability of such a use case (using the port binding extension [2]), but my
knowledge is not very current, so I'd need to fall back on some other folks
listening on the ML to chime in on the latter topic.

Hope this helps!
Armando

[1] -
http://docs.openstack.org/trunk/openstack-ops/content/scaling.html#segregate_cloud
[2] -
http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html
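To make the host-aggregate approach slightly more concrete, here is a toy sketch of the filtering idea behind it; this is not nova's actual scheduler-filter API, and all names and metadata keys are made up:

```python
# Toy model: each host belongs to an aggregate carrying free-form metadata,
# e.g. a "rack" key identifying the TOR switch it hangs off.
hosts = {
    "cn-1": {"rack": "tor-1"},
    "cn-2": {"rack": "tor-1"},
    "cn-23": {"rack": "tor-2"},
}

def filter_hosts(hosts, required):
    """Keep only hosts whose aggregate metadata matches all required pairs."""
    return [h for h, meta in hosts.items()
            if all(meta.get(k) == v for k, v in required.items())]

# A VM on a network pinned to TOR-1 would request rack affinity with it:
print(filter_hosts(hosts, {"rack": "tor-1"}))  # → ['cn-1', 'cn-2']
```

In a real deployment, mapping hosts to rack metadata is the out-of-band management step mentioned above; the scheduler only consumes whatever the operator has encoded.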


On 27 May 2014 09:53, A, Keshava keshav...@hp.com wrote:

  Hi,

 I have one of the basic question about the Nova Scheduler in the following
 below scenario.

 Whenever a new VM is to be hosted, is there any consideration of network
 attributes?

 Example let us say all the VMs with 10.1.x is under TOR-1, and 20.1.xy are
 under TOR-2.

 A new CN node is inserted under TOR-2 and at the same time a new tenant VM
 needs to be hosted on the 10.1.xa network.



 Then is it possible to mandate that the new VM (10.1.xa) be hosted under
 TOR-1 instead of getting scheduled under TOR-2 (where CN-23 is completely
 free from a resource perspective)?

 This is required to achieve prefix/route aggregation and to avoid network
 broadcast (in case they are scattered across different TORs/switches).









 Thanks &amp; regards,

 Keshava.A



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Armando M.
Hi Paul,

Just out of curiosity, I am assuming you are using the client that
still relies on httplib2. Patch [1] replaced httplib2 with requests,
but I believe that a new client that incorporates this change has not
yet been published. I wonder if the failures you are referring to
manifest themselves with the former http library rather than the
latter. Could you clarify?

Thanks,
Armando

[1] - https://review.openstack.org/#/c/89879/

On 29 May 2014 17:25, Paul Ward wpw...@us.ibm.com wrote:
 Well, for my specific error, it was an intermittent ssl handshake error
 before the request was ever sent to the
 neutron-server.  In our case, we saw that 4 out of 5 resize operations
 worked, the fifth failed with this ssl
 handshake error in neutronclient.

 I certainly think a GET is safe to retry, and I agree with your statement
 that PUTs and DELETEs probably
 are as well.  This still leaves a change in nova needing to be made to
 actually a) specify a conf option and
 b) pass it to neutronclient where appropriate.
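For illustration, a configurable retry along the lines Paul describes, restricted to idempotent verbs, might look roughly like this; it is a sketch with made-up names, not the actual neutronclient change:

```python
import time

IDEMPOTENT = {"GET", "PUT", "DELETE"}  # POST deliberately excluded

def with_retries(do_request, method, url, retries=3, delay=0.0):
    """Retry transient failures, but only for idempotent HTTP methods."""
    attempts = retries + 1 if method in IDEMPOTENT else 1
    last_exc = None
    for _ in range(attempts):
        try:
            return do_request(method, url)
        except IOError as exc:  # e.g. an intermittent SSL handshake failure
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Fake transport that fails twice before succeeding, to exercise the wrapper.
calls = {"n": 0}
def flaky(method, url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("ssl handshake failed")
    return 200

print(with_retries(flaky, "GET", "/v2.0/ports"))  # → 200
```

The retry count would come from the conf option mentioned in a), and POSTs fall through after a single attempt so a lost response can never duplicate a create.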


 Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:

 From: Aaron Rosen aaronoro...@gmail.com


 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/28/2014 07:44 PM

 Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient

 Hi,

 I'm curious if other openstack clients implement this type of retry
 thing. I think retrying on GET/DELETES/PUT's should probably be okay.

 What types of errors do you see in the neutron-server when it fails
 to respond? I think it would be better to move the retry logic into
 the server around the failures rather than the client (or better yet
 if we fixed the server :)). Most of the times I've seen this type of
 failure is due to deadlock errors caused between (sqlalchemy and
 eventlet *i think*) which cause the client to eventually timeout.

 Best,

 Aaron


 On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com wrote:
 Would it be feasible to make the retry logic only apply to read-only
 operations?  This would still require a nova change to specify the
 number of retries, but it'd also prevent invokers from shooting
 themselves in the foot if they call for a write operation.



 Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:

  From: Aaron Rosen aaronoro...@gmail.com

  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/27/2014 09:44 PM

  Subject: Re: [openstack-dev] [neutron] Supporting retries in
  neutronclient
 
  Hi,

 
  Is it possible to detect when the ssl handshaking error occurs on
  the client side (and only retry for that)? If so I think we should
  do that rather than retrying multiple times. The danger here is
  mostly for POST operations (as Eugene pointed out) where it's
  possible for the response to not make it back to the client and for
  the operation to actually succeed.
 
  Having this retry logic nested in the client also prevents things
  like nova from handling these types of failures individually since
  this retry logic is happening inside of the client. I think it would
  be better not to have this internal mechanism in the client and
  instead make the user of the client implement retry so they are
  aware of failures.
 
  Aaron
 

  On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com wrote:
  Currently, neutronclient is hardcoded to only try a request once in
  retry_request by virtue of the fact that it uses self.retries as the
  retry count, and that's initialized to 0 and never changed.  We've
  seen an issue where we get an ssl handshaking error intermittently
  (seems like more of an ssl bug) and a retry would probably have
  worked.  Yet, since neutronclient only tries once and gives up, it
  fails the entire operation.  Here is the code in question:
 
  https://github.com/openstack/python-neutronclient/blob/master/
  neutronclient/v2_0/client.py#L1296
 
  Does anybody know if there's some explicit reason we don't currently
  allow configuring the number of retries?  If not, I'm inclined to
  propose a change for just that.
 

Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Armando M.
mishandling of SSL was the very reason why I brought that change
forward; so I wouldn't rule it out completely ;)

A.

On 29 May 2014 19:15, Paul Ward wpw...@us.ibm.com wrote:
 Yes, we're still on a code level that uses httplib2.  I noticed that as
 well, but wasn't sure if that would really
 help here as it seems like an ssl thing itself.  But... who knows??  I'm not
 sure how consistently we can
 recreate this, but if we can, I'll try using that patch to use requests and
 see if that helps.



 Armando M. arma...@gmail.com wrote on 05/29/2014 11:52:34 AM:

 From: Armando M. arma...@gmail.com


 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/29/2014 11:58 AM

 Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient

 Hi Paul,

 Just out of curiosity, I am assuming you are using the client that
 still relies on httplib2. Patch [1] replaced httplib2 with requests,
 but I believe that a new client that incorporates this change has not
 yet been published. I wonder if the failures you are referring to
 manifest themselves with the former http library rather than the
 latter. Could you clarify?

 Thanks,
 Armando

 [1] - https://review.openstack.org/#/c/89879/

 On 29 May 2014 17:25, Paul Ward wpw...@us.ibm.com wrote:
  Well, for my specific error, it was an intermittent ssl handshake error
  before the request was ever sent to the
  neutron-server.  In our case, we saw that 4 out of 5 resize operations
  worked, the fifth failed with this ssl
  handshake error in neutronclient.
 
  I certainly think a GET is safe to retry, and I agree with your
  statement
  that PUTs and DELETEs probably
  are as well.  This still leaves a change in nova needing to be made to
  actually a) specify a conf option and
  b) pass it to neutronclient where appropriate.
 
 
  Aaron Rosen aaronoro...@gmail.com wrote on 05/28/2014 07:38:56 PM:
 
  From: Aaron Rosen aaronoro...@gmail.com
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 05/28/2014 07:44 PM
 
  Subject: Re: [openstack-dev] [neutron] Supporting retries in
  neutronclient
 
  Hi,
 
  I'm curious if other openstack clients implement this type of retry
  thing. I think retrying on GET/DELETES/PUT's should probably be okay.
 
  What types of errors do you see in the neutron-server when it fails
  to respond? I think it would be better to move the retry logic into
  the server around the failures rather than the client (or better yet
  if we fixed the server :)). Most of the times I've seen this type of
  failure is due to deadlock errors caused between (sqlalchemy and
  eventlet *i think*) which cause the client to eventually timeout.
 
  Best,
 
  Aaron
 
 
  On Wed, May 28, 2014 at 11:51 AM, Paul Ward wpw...@us.ibm.com wrote:
  Would it be feasible to make the retry logic only apply to read-only
  operations?  This would still require a nova change to specify the
  number of retries, but it'd also prevent invokers from shooting
  themselves in the foot if they call for a write operation.
 
 
 
  Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:
 
   From: Aaron Rosen aaronoro...@gmail.com
 
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org,
   Date: 05/27/2014 09:44 PM
 
   Subject: Re: [openstack-dev] [neutron] Supporting retries in
   neutronclient
  
   Hi,
 
  
   Is it possible to detect when the ssl handshaking error occurs on
   the client side (and only retry for that)? If so I think we should
   do that rather than retrying multiple times. The danger here is
   mostly for POST operations (as Eugene pointed out) where it's
   possible for the response to not make it back to the client and for
   the operation to actually succeed.
  
   Having this retry logic nested in the client also prevents things
   like nova from handling these types of failures individually since
   this retry logic is happening inside of the client. I think it would
   be better not to have this internal mechanism in the client and
   instead make the user of the client implement retry so they are
   aware of failures.
  
   Aaron
  
 
   On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com
   wrote:
   Currently, neutronclient is hardcoded to only try a request once in
   retry_request by virtue of the fact that it uses self.retries as the
   retry count, and that's initialized to 0 and never changed.  We've
   seen an issue where we get an ssl handshaking error intermittently
   (seems like more of an ssl bug) and a retry would probably have
   worked.  Yet, since neutronclient only tries once and gives up, it
   fails the entire operation.  Here is the code in question:
  
   https://github.com/openstack/python-neutronclient/blob/master/
   neutronclient/v2_0/client.py#L1296
  
   Does anybody know if there's some explicit reason we don't currently
   allow

Re: [openstack-dev] [Neutron] IPv6 bug fixes that would be nice to have in Juno

2014-10-03 Thread Armando M.
I have all of these bugs on my radar, and I want to fast track them
for merging in the next few days.

Please tag the bug reports with 'juno-rc-potential'.

For each of them we can discuss the loss of functionality they cause.
If no workaround can be found, we should definitely cut an RC2.

Armando

On 3 October 2014 12:21, Collins, Sean sean_colli...@cable.comcast.com wrote:
 On Fri, Oct 03, 2014 at 02:58:36PM EDT, Henry Gessau wrote:
 There are some fixes for IPv6 bugs that unfortunately missed the RC1 cut.
 These bugs are quite important for IPv6 users and therefore I would like to
 lobby for getting them into a possible RC2 of Neutron Juno.

 Henry and I spoke about these bugs, and I agree with his assessment. +1!
 --
 Sean M. Collins


Re: [openstack-dev] [neutron] HA of dhcp agents?

2014-10-21 Thread Armando M.
As far as I can tell when you specify:

dhcp_agents_per_network = X > 1

The server binds the network to all the agents (up to X), which means that
you have multiple instances of dnsmasq serving dhcp requests at the same
time. If one agent dies, there is no fail-over needed per se, as the other
agent will continue to serve dhcp requests unaffected.

For instance, in my env I have dhcp_agents_per_network=2, so If I create a
network, and list the agents serving the network I will see the following:

neutron dhcp-agent-list-hosting-net test

+--+++---+

| id   | host   | admin_state_up | alive |

+--+++---+

| 6dd09649-5e24-403b-9654-7aa0f69f04fb | host1  | True   | :-)   |

| 7d47721a-2725-45f8-b7c4-2731cfabdb48 | host2  | True   | :-)   |

+--+++---+

Isn't that what you're after?

Cheers,
Armando

On 21 October 2014 22:26, Noel Burton-Krahn n...@pistoncloud.com wrote:

 We currently have a mechanism for restarting the DHCP agent on another
 node, but we'd like the new agent to take over all the old networks of the
 failed dhcp instance.  Right now, since dhcp agents are distinguished by
 host, and the host has to match the host of the ovs agent, and the ovs
 agent's host has to be unique per node, the new dhcp agent is registered as
 a completely new agent and doesn't take over the failed agent's networks.
 I'm looking for a way to give the new agent the same roles as the previous
 one.

 --
 Noel


 On Tue, Oct 21, 2014 at 12:12 AM, Kevin Benton blak...@gmail.com wrote:

 No, unfortunately when the DHCP agent dies there isn't automatic
 rescheduling at the moment.

 On Mon, Oct 20, 2014 at 11:56 PM, Noel Burton-Krahn n...@pistoncloud.com
  wrote:

 Thanks for the pointer!

 I like how the first google hit for this is:

 Add details on dhcp_agents_per_network option for DHCP agent HA
 https://bugs.launchpad.net/openstack-manuals/+bug/1370934

 :) Seems reasonable to set dhcp_agents_per_network > 1.  What happens
 when a DHCP agent dies?  Does the scheduler automatically bind another
 agent to that network?

 Cheers,
 --
 Noel



 On Mon, Oct 20, 2014 at 9:03 PM, Jian Wen wenjia...@gmail.com wrote:

 See dhcp_agents_per_network in neutron.conf.

 https://bugs.launchpad.net/neutron/+bug/1174132

 2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn n...@pistoncloud.com:

 I've been working on failover for dhcp and L3 agents.  I see that in
 [1], multiple dhcp agents can host the same network.  However, it looks
 like I have to manually assign networks to multiple dhcp agents, which
 won't work.  Shouldn't multiple dhcp agents automatically fail over?

 [1]
 http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html







 --
 Best,

 Jian








 --
 Kevin Benton








Re: [openstack-dev] [neutron][stable] metadata agent performance

2014-10-21 Thread Armando M.
It sounds like the only reasonable option we are left with right now is to
document.

Even if we enabled/removed the backport, it would take time until users can
get their hands on a new cut of the stable branch.

We would need to be more diligent in the future and limit backports to just
bug fixes to prevent situations like this from occurring, or maybe we need
to have better testing...um...definitely the latter :)

My 2c
Armando

On 22 October 2014 05:56, Maru Newby ma...@redhat.com wrote:

 We merged caching support for the metadata agent in juno, and backported
 to icehouse.  It was enabled by default in juno, but disabled by default in
 icehouse to satisfy the stable maint requirement of not changing functional
 behavior.
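For context, the caching in question is essentially a time-bounded cache in front of neutron lookups; the following toy illustrates the concept only and is not the agent's actual implementation:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire after ttl seconds."""
    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]          # fresh: skip the expensive lookup
        value = fetch(key)           # stale or missing: hit the backend
        self._store[key] = (value, now)
        return value

# Count backend hits to show the cache absorbing repeated lookups.
hits = {"n": 0}
def lookup_ports(device_id):
    hits["n"] += 1
    return ["port-for-%s" % device_id]

cache = TTLCache(ttl=5)
for _ in range(10):
    cache.get_or_fetch("vm-1", lookup_ports)
print(hits["n"])  # → 1
```

With caching disabled, every one of those ten lookups goes to the backend, which is where the reported regression comes from.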

 While performance of the agent was improved with caching enabled, it
 regressed a reported 8x when caching was disabled [1].  This means that by
 default, the caching backport severely impacts icehouse Neutron's
 performance.

 So, what is the way forward?  We definitely need to document the problem
 for both icehouse and juno.  Is documentation enough?  Or can we enable
 caching by default in icehouse?  Or remove the backport entirely?

 There is also a proposal to replace the metadata agent’s use of the
 neutron client in favor of rpc [2].  There were comments on an old bug
 suggesting we didn’t want to do this [3], but assuming that we want this
 change in Kilo, is backporting even a possibility given that it implies a
 behavioral change to be useful?

 Thanks,


 Maru



 1: https://bugs.launchpad.net/cloud-archive/+bug/1361357
 2: https://review.openstack.org/#/c/121782
 3: https://bugs.launchpad.net/neutron/+bug/1092043


Re: [openstack-dev] [neutron] HA of dhcp agents?

2014-10-22 Thread Armando M.
Hi Noel,

On 22 October 2014 01:57, Noel Burton-Krahn n...@pistoncloud.com wrote:

 Hi Armando,

 Sort of... but what happens when the second one dies?


You mean, you lost both (all) agents? In this case, yes you'd need to
resurrect the agents or move the networks to another available agent.
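Conceptually, that manual move amounts to rebinding the dead agent's networks onto the remaining live agents; here is a toy sketch of the bookkeeping involved (made-up data structures, not neutron's scheduler code):

```python
def reschedule_dead_agents(bindings, alive):
    """Move networks bound to dead agents onto the least-loaded live agent.

    bindings: {agent_id: set of network_ids}; alive: set of live agent ids.
    Assumes every live agent appears as a key in bindings.
    """
    dead = [agent for agent in bindings if agent not in alive]
    for agent in dead:
        for net in sorted(bindings.pop(agent)):
            # Pick the live agent currently serving the fewest networks.
            target = min(alive, key=lambda a: (len(bindings[a]), a))
            bindings[target].add(net)
    return bindings

bindings = {"agent-1": {"net-a", "net-b"}, "agent-2": {"net-c"}}
result = reschedule_dead_agents(bindings, alive={"agent-2"})
print(sorted(result["agent-2"]))  # → ['net-a', 'net-b', 'net-c']
```

At the time of this thread, nothing in neutron performed this rebinding automatically, which is why the operator has to do it by hand (or script it against the agent-scheduler API).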


 If one DHCP agent dies, I need to be able to start a new DHCP agent on
 another host and take over from it.  As far as I can tell right now, when
 one DHCP agent dies, another doesn't take up the slack.


I am not sure I fully understand the failure mode you are trying to
address. The DHCP agents can work in an active-active configuration, so if
you have N agents assigned per network, all of them should be able to
address DHCP traffic. If this is not your experience, i.e. one agent dies
and DHCP is no longer served on the network by any other agent, then there
might be some other problem going on.
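The recovery step Armando describes ("move the networks to another available agent") can be sketched in a few lines. This is an illustrative model only, not Neutron's scheduler code; the agent ids, the replica-count preservation, and the least-loaded tie-breaking are assumptions made for the example.

```python
# Illustrative sketch only (not Neutron's actual scheduler): when DHCP
# agents die, rebind the networks they hosted to the remaining live
# agents, preserving each network's original replica count.

def reschedule_networks(bindings, live_agents):
    """bindings: dict network_id -> set of agent ids hosting it.
    live_agents: set of agent ids currently reporting as alive.
    Returns new bindings with dead agents replaced by the
    least-loaded live agents."""
    live_agents = set(live_agents)
    # Drop dead agents from every binding.
    result = {net: {a for a in agents if a in live_agents}
              for net, agents in bindings.items()}
    # Current load of each live agent (number of networks hosted).
    load = {a: 0 for a in live_agents}
    for agents in result.values():
        for a in agents:
            load[a] += 1
    # Refill each network up to its original replica count.
    for net in sorted(result):
        wanted = len(bindings[net])
        candidates = sorted(a for a in live_agents if a not in result[net])
        candidates.sort(key=lambda a: load[a])  # least-loaded first
        while len(result[net]) < wanted and candidates:
            agent = candidates.pop(0)
            result[net].add(agent)
            load[agent] += 1
    return result
```

With `dhcp_agents_per_network=2`, a network that loses one of its two agents would be backfilled with one more live agent, while a network that loses its only agent gets moved entirely.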




 I have the same problem with L3 agents by the way, that's next on my list

 --
 Noel


 On Tue, Oct 21, 2014 at 12:52 PM, Armando M. arma...@gmail.com wrote:

 As far as I can tell when you specify:

 dhcp_agents_per_network = X > 1

 The server binds the network to all the agents (up to X), which means
 that you have multiple instances of dnsmasq serving dhcp requests at the
 same time. If one agent dies, there is no fail-over needed per se, as the
 other agent will continue to serve dhcp requests unaffected.

 For instance, in my env I have dhcp_agents_per_network=2, so If I create
 a network, and list the agents serving the network I will see the following:

 neutron dhcp-agent-list-hosting-net test

 +--+++---+

 | id   | host   | admin_state_up | alive |

 +--+++---+

 | 6dd09649-5e24-403b-9654-7aa0f69f04fb | host1  | True   | :-)   |

 | 7d47721a-2725-45f8-b7c4-2731cfabdb48 | host2  | True   | :-)   |

 +--+++---+

 Isn't that what you're after?

 Cheers,
 Armando

 On 21 October 2014 22:26, Noel Burton-Krahn n...@pistoncloud.com wrote:

 We currently have a mechanism for restarting the DHCP agent on another
 node, but we'd like the new agent to take over all the old networks of the
 failed dhcp instance.  Right now, since dhcp agents are distinguished by
 host, and the host has to match the host of the ovs agent, and the ovs
 agent's host has to be unique per node, the new dhcp agent is registered as
 a completely new agent and doesn't take over the failed agent's networks.
 I'm looking for a way to give the new agent the same roles as the previous
 one.

 --
 Noel


 On Tue, Oct 21, 2014 at 12:12 AM, Kevin Benton blak...@gmail.com
 wrote:

 No, unfortunately when the DHCP agent dies there isn't automatic
 rescheduling at the moment.

 On Mon, Oct 20, 2014 at 11:56 PM, Noel Burton-Krahn 
 n...@pistoncloud.com wrote:

 Thanks for the pointer!

 I like how the first google hit for this is:

 Add details on dhcp_agents_per_network option for DHCP agent HA
 https://bugs.launchpad.net/openstack-manuals/+bug/1370934

 :) Seems reasonable to set dhcp_agents_per_network > 1.  What happens
 when a DHCP agent dies?  Does the scheduler automatically bind another
 agent to that network?

 Cheers,
 --
 Noel



 On Mon, Oct 20, 2014 at 9:03 PM, Jian Wen wenjia...@gmail.com wrote:

 See dhcp_agents_per_network in neutron.conf.

 https://bugs.launchpad.net/neutron/+bug/1174132

 2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn n...@pistoncloud.com:

 I've been working on failover for dhcp and L3 agents.  I see that in
 [1], multiple dhcp agents can host the same network.  However, it looks
 like I have to manually assign networks to multiple dhcp agents, which
 won't work.  Shouldn't multiple dhcp agents automatically fail over?

 [1]
 http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html







 --
 Best,

 Jian








 --
 Kevin Benton





Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Armando M.
Sorry for jumping into this thread late... there are lots of details to
process, and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward,
at the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I
think it is sensible to use the spec system we have adopted, namely Gerrit
and the spec submissions, to understand where we are.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor
specific blueprint for now.

When I look at these, I see that we jump all the way to implementation
details. From an architectural point of view, this does not make a lot of
sense.

In order to ensure that everyone is on the same page, I would suggest
having a discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible
interactions that an actor (i.e. the tenant or the admin) can have with the
system (an OpenStack deployment), when these NFV-enabling capabilities are
available? What are the observed outcomes once these interactions have
taken place?

- Management API: what abstractions do we expose to the tenant or admin
(do we augment the existing resources, or do we create new resources, or do
we do both)? This should obviously be driven by a set of use cases, and we
need to identify the minimum set of logical artifacts that would let us
meet the needs of the widest set of use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if
anything, so that we can implement these NFV-enabling constructs
successfully? Are there any changes to the core L2 API? Are there any
changes required to the core framework (scheduling, policy, notifications,
data model etc)?

- Add support to the existing plugin backends: the openvswitch reference
implementation is an obvious candidate, but other plugins may want to
leverage the newly defined capabilities too. Once the above mentioned
points have been fleshed out, it should be fairly straightforward to have
these efforts progress in autonomy.

IMO, until we get a full understanding of the aspects above, I don't
believe the core team is in the best position to determine the best
approach forward. I think it's in everyone's interest to make sure that
something cohesive comes out of this; the worst possible outcome is no
progress at all or, even worse, some Frankenstein system that no-one really
knows what it does or how it can be used.

I will go over the specs one more time in order to identify some answers to
my points above. I hope someone can help me through the process.


Many thanks,
Armando


Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-29 Thread Armando M.
I must admit I haven't dug into this too much, but this might also look
suspicious:

https://review.openstack.org/#/c/96782/

Perhaps it's a combination of both? :)

On 29 October 2014 08:17, Kyle Mestery mest...@mestery.com wrote:

 On Wed, Oct 29, 2014 at 7:25 AM, Hly henry4...@gmail.com wrote:
 
 
  Sent from my iPad
 
   On 2014-10-29, at 8:01 PM, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:
 
  I find our current design removes all flows and then adds them back entry
  by entry; this will cause every network node to break off all tunnels
  between other network nodes and all compute nodes.
  Perhaps a way around this would be to add a flag on agent startup
  which would have it skip reprogramming flows. This could be used for
  the upgrade case.
 
  I hit the same issue last week and filed a bug here:
  https://bugs.launchpad.net/neutron/+bug/1383674
 
  From an operators perspective this is VERY annoying since you also
 cannot push any config changes that requires/triggers a restart of the
 agent.
  e.g. something simple like changing a log setting becomes a hassle.
  I would prefer the default behaviour to be to not clear the flows or at
 the least an config option to disable it.
 
 
  +1, we also suffered from this even when a very little patch is done
 
 I'd really like to get some input from the tripleo folks, because they
 were the ones who filed the original bug here and were hit by the
 agent NOT reprogramming flows on agent restart. It does seem fairly
 obvious that adding an option around this would be a good way forward,
 however.

 Thanks,
 Kyle

 
  Cheers,
  Robert van Leeuwen
 




Re: [openstack-dev] [Neutron] Improving dhcp agent scheduling interface

2014-11-05 Thread Armando M.
Hi Eugene thanks for bringing this up for discussion. My comments inline.
Thanks,
Armando

On 5 November 2014 12:07, Eugene Nikanorov enikano...@mirantis.com wrote:

 Hi folks,

 I'd like to raise a discussion kept in irc and in gerrit recently:
 https://review.openstack.org/#/c/131944/

 The intention of the patch is to clean up particular scheduling
 method/interface:
 schedule_network.

 Let me clarify why I think it needs to be done (besides code API
 consistency reasons):
 Scheduling process is ultimately just a two steps:
 1) choosing an appropriate agent for the network
 2) adding a binding between the agent and the network
 To perform those two steps one doesn't need the network object; the
 network_id is sufficient.


I would argue that it isn't, actually.

You may need to know the state of the network to make that placement
decision. Just passing the id may cause the scheduling logic to issue an
extra DB query that can be easily avoided if the right interface between
the caller of a scheduler and the scheduler itself was in place. For
instance we cannot fix [1] (as you pointed out) today because the method
only accepts a dict that holds just a partial representation of the
network. If we had the entire DB object we would avoid that and just
passing the id is going in the opposite direction, IMO.


 However, there is a concern that having the full dict (or full network
 object) could allow us to do more flexible things in step 1, like deciding
 whether the network should be scheduled at all.


That's the whole point of scheduling, is it not? If you are arguing that we
should split the schedule method into two separate steps
(get_me_available_agent and bind_network_to_agent), and make the caller of
the schedule method carry out the two-step process by itself, I think it
could be worth exploring that, but at this point I don't believe this is
the right refactoring.
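The two-step split floated above (a placement step plus a binding step) could look roughly like the following. The class and method names are illustrative only, not Neutron's real API; the "skip networks with no subnets" check and the least-loaded placement are assumptions chosen to show why the placement step wants the full network object rather than just an id.

```python
# Toy sketch of a two-step scheduler: step 1 decides placement using
# network state, step 2 records the binding. Not Neutron code.

class TwoStepScheduler:
    def __init__(self, agents):
        self.agents = sorted(agents)   # known DHCP agent ids
        self.bindings = {}             # network_id -> agent id

    def get_available_agent(self, network):
        """Step 1: placement decision. Inspects the full network dict,
        e.g. to skip networks that have no subnets to serve."""
        if not network.get("subnets"):
            return None
        counts = {a: 0 for a in self.agents}
        for agent in self.bindings.values():
            counts[agent] += 1
        # Least-loaded agent; ties broken by agent id (sorted above).
        return min(self.agents, key=lambda a: counts[a])

    def bind_network_to_agent(self, network, agent):
        """Step 2: record the binding."""
        self.bindings[network["id"]] = agent

    def schedule(self, network):
        agent = self.get_available_agent(network)
        if agent is not None:
            self.bind_network_to_agent(network, agent)
        return agent
```

Note that step 1 needs `network["subnets"]`: with only an id it would have to issue the extra DB query the thread is arguing about.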


 See the TODO for the reference:


[1]



 https://github.com/openstack/neutron/blob/master/neutron/scheduler/dhcp_agent_scheduler.py#L64

 However, this just puts an unnecessary (and actually, incorrect)
 requirement on the caller, to provide the network dict, mainly because
 caller doesn't know what content of the dict the callee (scheduler driver)
 expects.


Why is it incorrect? We should move away from dictionaries and pass
objects instead, so that they can be reused where it makes sense without
incurring the overhead of re-fetching the object associated with the uuid
when needed. We can even hide the complexity of refreshing the copy of the
object every time it is accessed, if needed. With information hiding and
encapsulation we can wrap this logic in one place, without scattering it
around everywhere in the code base as is done today.
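The encapsulation idea described above, i.e. hiding the refresh-by-uuid logic behind an object, could look something like this. This is not existing Neutron code; `LazyNetwork` and its `fetch` callable are hypothetical names for the sketch.

```python
# Illustrative sketch: wrap a DB-backed network so callers read fields
# transparently, with at most one fetch per refresh, instead of
# re-fetching by uuid all over the code base.

class LazyNetwork:
    def __init__(self, network_id, fetch):
        self._id = network_id
        self._fetch = fetch     # callable: uuid -> dict (one DB query)
        self._cache = None

    def invalidate(self):
        """Mark the cached copy stale; the next access re-fetches."""
        self._cache = None

    def __getitem__(self, key):
        if self._cache is None:
            self._cache = self._fetch(self._id)   # single round trip
        return self._cache[key]
```

Callers keep indexing the object like the dict they use today, while the fetching policy lives in exactly one place.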


 Currently the scheduler is only interested in the ID; if there is another
 scheduling driver,


No, the scheduler needs to know about the state of the network to do proper
placement. It's a side-effect of the default scheduling (i.e. random). If
we want to do more intelligent placement we need the state of the network.


 it may now require additional parameters (like list of full subnet dicts)
 in the dict which may or may not be provided by the calling code.
 Instead of making assumptions about what is in the dict, it's better to go
 with a simpler and clearer interface that allows the scheduling driver to
 do whatever makes sense to it. In other words: the caller provides the id,
 and the driver fetches everything it needs using the id. For existing
 scheduling drivers it's a no-op.


Again, the problem lies with the fact that we're passing dictionaries
around.



 I think L3 scheduling is an example of an interface done in a better way;
 to me it looks clearer and more consistent.


I would argue that the L3 scheduling API is a bad example, for the above
mentioned reasons.



 Thanks,
 Eugene.


At this point I am still not convinced by the arguments provided that the
patch 131944 https://review.openstack.org/#/c/131944/ should go forward
as it is.








Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Armando M.
I would be open to making this toggle switch available; however, I feel
that doing it via static configuration can introduce an unnecessary burden
on the operator. Perhaps we could explore a way for the agent to figure out
which state it's supposed to be in based on its reported status?
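One possible reading of "figure out which state it's supposed to be in based on its reported status" is the sketch below: at startup the agent checks its own last reported heartbeat and wipes flows only on a cold start or after an outage longer than the liveness timeout. The 75-second default mirrors Neutron's `agent_down_time` option, but the whole decision function is an assumption for illustration, not agreed behavior.

```python
# Hypothetical startup decision for the OVS agent: reset flows only
# when the agent looks freshly started or long-dead, so a quick
# restart does not cause a data-plane outage.

import time

def should_reset_flows(last_heartbeat, agent_down_time=75, now=None):
    """last_heartbeat: unix timestamp of the agent's last report_state,
    or None if the agent never registered. Returns True when a full
    flow reset is warranted, False for a quick restart."""
    if last_heartbeat is None:
        return True                      # never seen before: start clean
    now = time.time() if now is None else now
    # Down longer than the liveness timeout: the server may have
    # rescheduled resources, so the installed flows may have diverged.
    return (now - last_heartbeat) > agent_down_time
```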

Armando

On 5 November 2014 12:09, Salvatore Orlando sorla...@nicira.com wrote:

 I have no opposition to that, and I will be happy to assist reviewing the
 code that will enable flow synchronisation (or, to put it more simply, the
 punctual removal of flows unknown to the l2 agent).

 In the meanwhile, I hope you won't mind if we go ahead and start making
 flow reset optional - so that we stop causing downtime upon agent restart.

 Salvatore

 On 5 November 2014 11:57, Erik Moe erik@ericsson.com wrote:



 Hi,



 I also agree; IMHO we need a flow synchronization method so we can avoid
 network downtime and stray flows.



 Regards,

 Erik





 *From:* Germy Lure [mailto:germy.l...@gmail.com]
 *Sent:* den 5 november 2014 10:46
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron][TripleO] Clear all flows when
 ovs agent start? why and how avoid?



 Hi Salvatore,

 A startup flag is really a simpler approach. But in what situations
 should we set this flag to remove all flows? Upgrade? Manual restart?
 Internal fault?



 Indeed, we only need to refresh flows when the flows in OVS are
 inconsistent (incorrect, unwanted, stale and so on) with the agent's view.
 But the problem is how we know this. I think a startup flag is too coarse,
 unless we can tolerate the inconsistent situation.



 Of course, I believe that turning off the startup flow-reset action can
 resolve most problems; the flows are correct most of the time, after all.
 But considering NFV's five-nines requirement, I still recommend the flow
 synchronization approach.



 BR,

 Germy



 On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 From what I gather from this thread and related bug report, the change
 introduced in the OVS agent is causing a data plane outage upon agent
 restart, which is not desirable in most cases.



 The rationale for the change that introduced this bug was, I believe,
 cleaning up stale flows on the OVS agent, which also makes some sense.



 Unless I'm missing something, I reckon the best way forward is actually
 quite straightforward; we might add a startup flag to reset all flows and
 not reset them by default.

 While I agree the flow synchronisation process proposed in the previous
 post is valuable too, I hope we might be able to fix this with a simpler
 approach.



 Salvatore



 On 5 November 2014 04:43, Germy Lure germy.l...@gmail.com wrote:

 Hi,



 Considering what triggers an agent restart, I think it's nothing but:

 1). only restart agent

 2). reboot the host that agent deployed on



 When the agent started, the ovs may:

 a.have all correct flows

 b.have nothing at all

 c.have partly correct flows, the others may need to be reprogrammed,
 deleted or added



 In any case, I think both users and developers would be happy to see the
 system recover ASAP after an agent restart. Ideally the agent pushes only
 the incorrect flows and keeps the correct ones; this ensures that traffic
 using the correct flows keeps working while the agent starts.



 So, I suggest two solutions:

 1. After restarting, the agent gets all flows from OVS, compares them
 with its local flows, and corrects only the ones that differ.

 2. Adapt OVS and the agent: the agent just pushes all flows (without
 removing any) every time, and OVS prepares two flow tables to switch
 between (like an RCU lock).



 Option 1 is recommended because of third-party vendors.
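Option 1, i.e. diffing the flows installed in OVS against the agent's desired flows and correcting only the difference, reduces to a set difference. A minimal sketch follows; flows are modeled as plain strings for illustration, and real ovs-ofctl dump output would need canonicalization before comparison.

```python
# Minimal flow-synchronization sketch: compute stray and missing
# flows instead of wiping everything on agent restart.

def sync_flows(actual, desired):
    """Return (to_delete, to_add). Flows present in both sets are
    left untouched, so traffic using them is never interrupted."""
    actual, desired = set(actual), set(desired)
    return sorted(actual - desired), sorted(desired - actual)
```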



 BR,

 Germy





 On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec openst...@nemebean.com
 wrote:

 On 10/29/2014 10:17 AM, Kyle Mestery wrote:
  On Wed, Oct 29, 2014 at 7:25 AM, Hly henry4...@gmail.com wrote:
 
 
  Sent from my iPad
 
   On 2014-10-29, at 8:01 PM, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:
 
  I find our current design is remove all flows then add flow by
 entry, this
  will cause every network node will break off all tunnels between
 other
  network node and all compute node.
  Perhaps a way around this would be to add a flag on agent startup
  which would have it skip reprogramming flows. This could be used for
  the upgrade case.
 
  I hit the same issue last week and filed a bug here:
  https://bugs.launchpad.net/neutron/+bug/1383674
 
  From an operators perspective this is VERY annoying since you also
 cannot push any config changes that requires/triggers a restart of the
 agent.
  e.g. something simple like changing a log setting becomes a hassle.
  I would prefer the default behaviour to be to not clear the flows or
 at the least an config option to disable it.
 
 
  +1, we also suffered from this even when a very little patch is done
 
  I'd really like to get some input from the tripleo folks, because they
  were the ones who filed the 

[openstack-dev] Fw: [neutron] social event

2014-11-06 Thread Armando M.
I have just realized that I should have cross-referenced this mail on both
MLs. Same message for the dev mailing list.

Thanks,
Armando

On 6 November 2014 00:32, Armando M. arma...@gmail.com wrote:

 Hi there,

 I know this may be somewhat short notice, but a few of us have wondered if
 we should continue the tradition of having a social gathering of Neutron
 folks to have a few drinks and talk about work in a slightly less boring
 setting.

 I was looking at:

 https://plus.google.com/+PlayOffWagramParis/about?hl=en

 It seems close enough to the conference venue, and spacious enough to hold
 a dozen people or so. I would suggest we go over there right after the
 end of the summit session or thereabouts, say 6:30pm.

 RSVP

 Cheers,
 Armando



Re: [openstack-dev] [neutron] dvr l3_snat

2014-11-07 Thread Armando M.
Not sure if you've seen this one too:

https://wiki.openstack.org/wiki/Neutron/DVR

Hope this helps!
Armando

On 7 November 2014 01:50, Li Tianqing jaze...@163.com wrote:

 Oh, thanks, i finally find it.
 it's all here.
 https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr

 Thanks a lot.

 --
 Best
 Li Tianqing

 At 2014-11-06 20:47:39, Henry henry4...@gmail.com wrote:

 Have you read previous posts? This topic had been discussed for a while.

 Sent from my iPad

  On 2014-11-6, at 6:18 PM, Li Tianqing jaze...@163.com wrote:

 Hello,
   Why do we put l3_snat on the network node to handle North/South snat,
 and why don't we put it on the compute node?
   Is it possible to put the l3_agent on all compute nodes for North/South
 snat, dnat, and east/west l3 routing?




 --
 Best
 Li Tianqing












Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon ode support

2014-11-08 Thread Armando M.
Hi Miguel,

Thanks for picking this up. Pull me in and I'd be happy to help!

Cheers,
Armando

On 7 November 2014 10:05, Miguel Ángel Ajo majop...@redhat.com wrote:


 Hi Yorik,

    I was talking with Mark McClain a minute ago here at the summit about
 this, and he told me that now, at the start of the cycle, looks like a good
 moment to merge the spec and the rootwrap daemon bits, so we have a lot of
 headroom for testing during the next months.

We need to upgrade the spec [1] to the new Kilo format.

    Do you have some time to do it? I can allocate some time and do it
 right away.

 [1] https://review.openstack.org/#/c/93889/
 --
 Miguel Ángel Ajo
 Sent with Sparrow http://www.sparrowmailapp.com/?sig

 On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:

 +1

 Sent from my Android phone using TouchDown (www.nitrodesk.com)


 -Original Message-
 From: Yuriy Taraday [yorik@gmail.com]
 Received: Thursday, 24 Jul 2014, 0:42
 To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]

 Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon
mode support


 Hello.

 I'd like to propose making a spec freeze exception for
 rootwrap-daemon-mode spec [1].

 Its goal is to save agents' execution time by using daemon mode for
 rootwrap and thus avoiding python interpreter startup time as well as sudo
 overhead for each call. Preliminary benchmark shows 10x+ speedup of the
 rootwrap interaction itself.

 This spec has a number of supporters from the Neutron team (Carl and Miguel
 gave it their +2 and +1) and has all its code waiting for review [2], [3],
 [4]. The only thing that has been blocking its progress is Mark's -2, left
 when the oslo.rootwrap spec hadn't been merged yet. Now that's not the case,
 and the code in oslo.rootwrap is steadily getting approved [5].

 [1] https://review.openstack.org/93889
 [2] https://review.openstack.org/82787
 [3] https://review.openstack.org/84667
 [4] https://review.openstack.org/107386
 [5]
 https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z

 --

 Kind regards, Yuriy.







Re: [openstack-dev] [Neutron] VMware networking support

2014-11-13 Thread Armando M.
I chimed in on another thread, but I am reinstating my point just in case.

On 13 November 2014 04:38, Gary Kotton gkot...@vmware.com wrote:

  Hi,
  A few months back we started to work on an umbrella spec for VMware
  networking support (https://review.openstack.org/#/c/105369). There are a
 number of different proposals for a number of different use cases. In
 addition to providing one another with an update of our progress we need to
 discuss the following challenges:

- At the summit there was talk about splitting out vendor code from
the neutron code base. The aforementioned specs are not being approved
until we have decided what we as a community want/need. We need to
understand how we can continue our efforts and not be blocked or hindered
by this debate.

The proposal of allowing vendor plugins to be in full control of their own
destiny will be submitted like any other blueprint and will be discussed
like any other community effort. In my opinion, there is no need to be
blocked waiting to see whether the proposal goes anywhere. Specs, code and
CI being submitted will have minimal impact irrespective of any decision
reached.

So my suggestion is to keep your code current with trunk, and do your
third-party CI infrastructure homework, so that when we are ready to pull
the trigger there will be no further delay.


- CI updates – in order to provide a new plugin we are required to
provide CI (yes, this is written in stone and in some cases marble)
- Additional support may be required in the following:
   - Nova – for example Neutron may be exposing extensions or
   functionality that requires Nova integrations
   - Devstack – In order to get CI up and running we need devstack
   support

 As a step forwards I would like to suggest that we meeting at
 #openstack-vmware channel on Tuesday at 15:00 UTC. Is this ok with everyone?
 Thanks
 Gary





Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-14 Thread Armando M.
Last Friday I recall we had two discussions around this topic. One in the
morning, which I think led Maruti to push [1]. The way I understood [1]
was that it is an attempt at unifying [2] and [3], by choosing the API
approach of one and the architectural approach of the other.

[1] https://review.openstack.org/#/c/134179/
[2] https://review.openstack.org/#/c/100278/
[3] https://review.openstack.org/#/c/93613/

Then there was another discussion in the afternoon, but I am not 100% sure
of the outcome.

All this churn makes me believe that we probably just need to stop
pretending we can achieve any sort of consensus on the approach, let the
different alternatives develop independently (assuming they can all develop
independently), and then let natural evolution take its course :)

Ultimately the biggest debate is about what the API model needs to be for
these abstractions. We can judge which one is the best API of all, but
sometimes this ends up being a religious fight. A good API for me might not
be a good API for you, even though I strongly believe that a good API is
one that:

- is hard to use incorrectly
- is clear to understand
- does one thing, and does it well

So far I have not been convinced that we need to cram more than one
abstraction into a single API, as that violates the above mentioned
principles. Ultimately I like the L2 GW API proposed by [1] and [2] because
it's in line with those principles. I'd rather start from there and iterate.

My 2c,
Armando

On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com wrote:

 Thanks guys.

 I think you've answered my initial question. Probably not in the way I was
 hoping it to be answered, but it's ok.

 So now we have potentially 4 different blueprint describing more or less
 overlapping use cases that we need to reconcile into one?
 If the above is correct, then I suggest we go back to the use case and
 make an effort to abstract a bit from thinking about how those use cases
 should be implemented.

 Salvatore

 On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote:

 Hello all,
 Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of
 its use cases is exactly the L2 gateway. These proposals could probably be
 folded into a more generic effort for moving existing datacenter L2
 resources into Neutron.
 Cheers,

 On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi,

 As far as I understood from last Friday afternoon's discussions during the
 design summit, this use case is in the scope of another umbrella spec
 which would define external connectivity for neutron networks. Details
 of those connectivity would be defined through service plugin API.

 Ian do you plan to define such an umbrella spec? or at least, could
 you sum up the agreement of the design summit discussion in the ML?

 I see at least 3 specs which would be under such an umbrella spec :
 https://review.openstack.org/#/c/93329/ (BGPVPN)
 https://review.openstack.org/#/c/101043/ (Inter DC connectivity with
 VPN)
 https://review.openstack.org/#/c/134179/ (l2 gw aas)


 On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com
 wrote:
  Thanks Maruti,
 
  I have some comments and questions which I've posted on gerrit.
  There are two things I would like to discuss on the mailing list
 concerning
  this effort.
 
  1) Is this spec replacing  https://review.openstack.org/#/c/100278 and
  https://review.openstack.org/#/c/93613 - I hope so, otherwise this
 just adds
  even more complexity.
 
  2) It sounds like you should be able to implement this service plugin
 in
  either a feature branch or a repository distinct from neutron. Can you
  confirm that?
 
  Salvatore
 
  On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com
  wrote:
 
  Hi Friends,
 
   As discussed during the summit, I have uploaded the spec for
 review
  at https://review.openstack.org/#/c/134179/
 
  Thanks,
  Maruti
 
 
 
 
 
 
 
 





 --
 Igor Duarte Cardoso.
 http://igordcard.com
 @igordcard https://twitter.com/igordcard







Re: [openstack-dev] [Neutron] LeastNetwork scheduling for DHCP

2014-11-14 Thread Armando M.
Benjamin,

Feel free to reach out. If you are referring to my -2, that was just
provisional.

Before we can go ahead and see an improved scheduling capability for DHCP,
you guys need to resolve the conflict between the overlapping blueprints,
working together or giving up one in favor of the other.

Cheers,
Armando

On 14 November 2014 07:28, GRASSART Benjamin 
benjamin.grass...@thalesgroup.com wrote:

 Hi all,



 I would definitely be glad to work on the subject as well.

 However, I am not sure I fully understand Armando's last remark on our
 change.



 I will try to discuss it with him on IRC.



 Regards,



 Benjamin GRASSART



 [@@ THALES GROUP INTERNAL @@]



 *From:* S M, Praveen Kumar [mailto:praveen-sm.ku...@hp.com]
 *Sent:* Friday, 7 November 2014 09:27
 *To:* Narasimhan, Vivekanandan; OpenStack Development Mailing List (not
 for usage questions)
 *Cc:* Beltur, Jayashree; GRASSART Benjamin; Sourabh Patwardhan
 (sopatwar); M, Shiva Kumar; A, Keshava
 *Subject:* RE: [Neutron] LeastNetwork scheduling for DHCP



 Hi Vivek,



 We are definitely interested in working on these blueprints
 collaboratively.



 We have a working implementation for our blueprint and received a few
 important comments from Armando, which we are currently addressing.







 Regards

 Praveen.





 *From:* Narasimhan, Vivekanandan
 *Sent:* Thursday, November 06, 2014 9:09 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Beltur, Jayashree; S M, Praveen Kumar;
 benjamin.grass...@thalesgroup.com; Sourabh Patwardhan (sopatwar)
 *Subject:* [Neutron] LeastNetwork scheduling for DHCP



 Hi Neutron Stackers,



 There is an interest among vendors to bring Least Networks scheduling for
 DHCP into OpenStack Neutron.



 Currently there are the following blueprints, all of them
 trying to address this issue:

 https://review.openstack.org/111210

 https://review.openstack.org/#/c/130912/

 https://review.openstack.org/104587
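 For context, the idea behind least-networks scheduling is simple enough to
 sketch in a few lines of Python. This is an illustrative toy, not code from
 any of the blueprints above; the agent dictionaries are hypothetical
 stand-ins for Neutron's DHCP agent model:

```python
# Illustrative sketch of "least networks" DHCP scheduling: among the
# alive DHCP agents, pick the one currently hosting the fewest networks,
# instead of choosing at random as the default scheduler does. The data
# structures are hypothetical stand-ins for Neutron's agent model.

def schedule_network(agents, network_id):
    """Return the agent chosen to host DHCP for network_id, or None."""
    eligible = [a for a in agents if a["alive"]]
    if not eligible:
        return None
    # Least-loaded agent: the one hosting the fewest networks wins.
    chosen = min(eligible, key=lambda a: len(a["networks"]))
    chosen["networks"].append(network_id)
    return chosen

agents = [
    {"host": "net-a", "alive": True, "networks": ["n1", "n2"]},
    {"host": "net-b", "alive": True, "networks": ["n3"]},
    {"host": "net-c", "alive": False, "networks": []},
]
print(schedule_network(agents, "n4")["host"])  # net-b, hosting the fewest
```

 The real implementations under review would, of course, express the same
 idea as a SQL query against the agent/network binding tables rather than an
 in-memory scan.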



 We are trying to pull together all these BPs as one umbrella BP, on which
 we can pour in volunteers from every side, with clearing this umbrella BP
 itself as the initial step.



 So we would like to collaborate, to plan BP approval for these.



 Please respond if you are interested.



 --

 Thanks,



 Vivek









[openstack-dev] [Neutron] Core/Vendor code decomposition

2014-11-14 Thread Armando M.
Hello,

As follow-up action after the Design Summit Session on Core/Vendor split,
please find the proposal outlined here:

https://review.openstack.org/#/c/134680/

I know that Anita will tell me off since I asked for reviews on the ML, but
I felt that it was important to raise awareness, even more than necessary :)

I also want to stress the fact that this proposal would not have been
possible without the help of everyone we talked to over the last few weeks
who gave us constructive feedback.

Finally, a special thanks goes to Maru Newby and Kevin Benton who helped
with most parts of the proposal.

Let the review tango begin!

Cheers,
Armando


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-17 Thread Armando M.
On 17 November 2014 01:13, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi

 On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote:
  Last Friday I recall we had two discussions around this topic. One in the
  morning, which I think led to Maruti to push [1]. The way I understood
 [1]
  was that it is an attempt at unifying [2] and [3], by choosing the API
  approach of one and the architectural approach of the other.
 
  [1] https://review.openstack.org/#/c/134179/
  [2] https://review.openstack.org/#/c/100278/
  [3] https://review.openstack.org/#/c/93613/
 
  Then there was another discussion in the afternoon, but I am not 100% of
 the
  outcome.

 Me neither, that's why I'd like Ian, who led this discussion, to sum
 up the outcome from his point of view.

  All this churn makes me believe that we probably just need to stop
  pretending we can achieve any sort of consensus on the approach and let
 the
  different alternatives develop independently, assuming they can all
 develop
  independently, and then let natural evolution take its course :)

 I tend to agree, but I think that one of the reasons why we are looking
 for a consensus is that API evolutions proposed through
 neutron-specs are rejected by core devs, because they rely on external
 components (SDN controllers, proprietary hardware...) or are not a
 high priority for the Neutron core devs.


I am not sure I agree with this statement. I am not aware of any proposal
here being dependent on external components as you suggested, but even if
it were, an API can be implemented in multiple ways, just like the (core)
Neutron API can be implemented using a fully open source solution or an
external party like an SDN controller.


 By finding a consensus, we show that several players are interested in
 such an API, and it helps to convince core-dev that this use-case, and
 its API, is missing in neutron.


Right, but it seems we are struggling to find this consensus. In this
particular instance, where we are trying to address the use case of L2
Gateway (i.e. allow Neutron logical networks to be extended with physical
ones), it seems that everyone has a different opinion as to what
abstraction we should adopt in order to express and configure the L2
gateway entity, and at the same time I see no convergence in sight.

Now if the specific L2 Gateway case were to be considered part of the core
Neutron API, then such a consensus would be mandatory IMO, but if it isn't,
is there any value in striving for that consensus at all costs? Perhaps
not, and we can let multiple attempts experiment and innovate
independently.

So far, all my data points seem to imply that such an abstraction need not
be part of the core API.


 Now, if there is room to easily propose new APIs in Neutron, it makes
 sense to let new APIs appear and evolve, and then "let natural
 evolution take its course", as you said.
 To me, this is in the scope of the advanced services project.


Advanced Services may be a misnomer, but an incubation feature, sure why
not?



  Ultimately the biggest debate is on what the API model needs to be for
 these
  abstractions. We can judge on which one is the best API of all, but
  sometimes this ends up being a religious fight. A good API for me might
 not
  be a good API for you, even though I strongly believe that a good API is
 one
  that can:
 
  - be hard to use incorrectly
  - clear to understand
  - does one thing, and one thing well
 
  So far I have been unable to be convinced why we'd need to cram more than
  one abstraction in one single API, as it does violate the above mentioned
  principles. Ultimately I like the L2 GW API proposed by 1 and 2 because
 it's
  in line with those principles. I'd rather start from there and iterate.
 
  My 2c,
  Armando
 
  On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com
 wrote:
 
  Thanks guys.
 
  I think you've answered my initial question. Probably not in the way I
 was
  hoping it to be answered, but it's ok.
 
  So now we have potentially 4 different blueprint describing more or less
  overlapping use cases that we need to reconcile into one?
  If the above is correct, then I suggest we go back to the use case and
  make an effort to abstract a bit from thinking about how those use cases
  should be implemented.
 
  Salvatore
 
  On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote:
 
  Hello all,
  Also, what about Kevin's https://review.openstack.org/#/c/87825/? One
 of
  its use cases is exactly the L2 gateway. These proposals could
 probably be
  inserted in a more generic work for moving existing datacenter L2
 resources
  to Neutron.
  Cheers,
 
  On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com
 wrote:
 
  Hi,
 
  As far as I understood last friday afternoon dicussions during the
  design summit, this use case is in the scope of another umbrella spec
  which would define external connectivity for neutron networks. Details
  of those

Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-18 Thread Armando M.
Hi Carl,

Thanks for kicking this off. I am also willing to help as a core reviewer
of blueprints and code
submissions only.

As for the ML2 agent, we all know that for historic reasons Neutron has
grown to be not only a networking orchestration project but also a
reference implementation resembling what some might call an SDN
controller.

I think that most of the Neutron folks realize that we need to move away
from this model and rely on a strong open source SDN alternative; for these
reasons, I don't think that pursuing an ML2 agent would be a path we should
go down anymore. It's time and energy that could be more effectively
spent elsewhere, especially on the refactoring. Now if the refactoring
effort ends up being labelled ML2 Agent, I would be okay with it, but my
gut feeling tells me that any attempt at consolidating code to embrace more
than one agent logic at once is gonna derail the major goal of paying down
the so called agent debt.

My 2c
Armando


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-18 Thread Armando M.
Mark, Kyle,

What is the strategy for tracking the progress and all the details about
this initiative? Blueprint spec, wiki page, or something else?

One thing I personally found useful about the spec approach adopted in [1],
was that we could quickly and effectively incorporate community feedback;
having said that I am not sure that the same approach makes sense here,
hence the question.

Also, what happens for experimental efforts that are neither L2-3 nor L4-7
(e.g. TaaS or NFV related ones?), but they may still benefit from this
decomposition (as it promotes better separation of responsibilities)? Where
would they live? I am not sure we made any particular progress on the
incubator project idea that was floated a while back.

Cheers,
Armando

[1] https://review.openstack.org/#/c/134680/

On 18 November 2014 15:32, Doug Wiegley do...@a10networks.com wrote:

  Hi,

   so the specs repository would continue to be shared during the Kilo
 cycle.

  One of the reasons to split is that these two teams have different
 priorities and velocities.  Wouldn’t that be easier to track/manage as
 separate launchpad projects and specs repos, irrespective of who is
 approving them?

  Thanks,
 doug



  On Nov 18, 2014, at 10:31 PM, Mark McClain m...@mcclain.xyz wrote:

  All-

 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well in
 the early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started building
 out layers 4 through 7.  Initially, we thought that development would float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4 through
 7.  In the last few cycles, we’ve also discovered that these concentrations
 have different velocities and a single core team forces one to match the
 other to the detriment of the one forced to slow down.

 Going forward we want to divide the Neutron repository into two separate
 repositories lead by a common Networking PTL.  The current mission of the
 program will remain unchanged [1].  The split would be as follows:

 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer 2
 and layer 3 services.

 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advanced services library provides controllers that can be configured
 to manage the abstractions for layer 4 through 7 services.

 Mechanics of the split:
 - Both repositories are members of the same program, so the specs
 repository would continue to be shared during the Kilo cycle.  The PTL and
 the drivers team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the
 Infra and Networking teams). The timing is designed to enable the proposed
 REST changes to land around the time of the December development sprint.
 - The core team for each repository will be determined and proposed by
 Kyle Mestery for approval by the current core team.
 - The Neutron Server and the Neutron Adv Services Library would be
 co-gated to ensure that incompatibilities are not introduced.
 - The Advanced Services Library would be an optional dependency of Neutron,
 so integrated cross-project checks would not be required to enable it
 during testing.
 - The split should not adversely impact operators and the Networking
 program should maintain standard OpenStack compatibility and deprecation
 cycles.

 This proposal to divide into two repositories achieved a strong consensus
 at the recent Paris Design Summit and it does not conflict with the current
 governance model or any proposals circulating as part of the ‘Big Tent’
 discussion.

 Kyle and mark

 [1]
 https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml







Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-18 Thread Armando M.
Hi,

On 18 November 2014 16:22, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Sorry I'm a bit late to this, but that's what you get from being on
 holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
 I swear I'll get to them.)


Ah! I hope it was good at least :)



 On 17 November 2014 01:13, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi

 On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote:
  Last Friday I recall we had two discussions around this topic. One in
 the
  morning, which I think led to Maruti to push [1]. The way I understood
 [1]
  was that it is an attempt at unifying [2] and [3], by choosing the API
  approach of one and the architectural approach of the other.
 
  [1] https://review.openstack.org/#/c/134179/
  [2] https://review.openstack.org/#/c/100278/
  [3] https://review.openstack.org/#/c/93613/
 
  Then there was another discussion in the afternoon, but I am not 100%
 of the
  outcome.

 Me neither, that's why I'd like Ian, who led this discussion, to sum
 up the outcome from his point of view.


 So, the gist of what I said is that we have three, independent, use cases:

 - connecting two VMs that like to tag packets to each other (VLAN clean
 networks)
 - connecting many networks to a single VM (trunking ports)
 - connecting the outside world to a set of virtual networks

 We're discussing that last use case here.  The point I made was that:

 - there are more encaps in the world than just VLANs
 - they can all be solved in the same way using an edge API


No disagreement all the way up to this point, assuming that I don't worry
about what this edge API really is.


 - if they are solved using an edge API, the job of describing the network
 you're trying to bring in (be it switch/port/vlan, or MPLS label stack, or
 l2tpv3 endpoint data) is best kept outside of Neutron's API, because
 Neutron can't usefully do anything with it other than validate it and hand
 it off to whatever network control code is being used.  (Note that most
 encaps will likely *not* be implemented in Neutron's inbuilt control code.)


This is where the disagreement begins, as far as I am concerned; in fact we
already have a well defined way of describing what a network entity in
Neutron is, namely an L2 broadcast domain abstraction. An L2 gateway API
that is well defined and well scoped should just express how one can be
connected to another, nothing more, at least as a starting point.
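To make that concrete, a minimal resource model along those lines could look
like the sketch below. This is purely an illustration of the "well scoped"
API shape being argued for; the resource names, fields, and helper functions
are invented for the example and are not the API proposed in [1]:

```python
# Hypothetical sketch of a narrowly-scoped L2 gateway model: a gateway
# names a set of physical interfaces, and a "connection" simply binds a
# Neutron network (an L2 broadcast domain) to that gateway on a given
# segmentation id -- nothing more. All names/fields are illustrative.

import uuid

def create_l2_gateway(state, name, interfaces):
    gw_id = str(uuid.uuid4())
    state["gateways"][gw_id] = {"name": name, "interfaces": interfaces}
    return gw_id

def create_l2_gateway_connection(state, gw_id, network_id, segmentation_id):
    if gw_id not in state["gateways"]:
        raise ValueError("unknown gateway %s" % gw_id)
    conn_id = str(uuid.uuid4())
    state["connections"][conn_id] = {
        "l2_gateway_id": gw_id,
        "network_id": network_id,            # the Neutron broadcast domain
        "segmentation_id": segmentation_id,  # e.g. the VLAN on the wire
    }
    return conn_id

state = {"gateways": {}, "connections": {}}
gw = create_l2_gateway(state, "rack1-tor", ["eth1", "eth2"])
conn = create_l2_gateway_connection(state, gw, "net-uuid", 100)
print(state["connections"][conn]["segmentation_id"])  # 100
```

Note how nothing here says whether the gateway is a hardware ToR, a software
bridge, or an SDN controller endpoint; that is exactly the point of keeping
the abstraction to "how one broadcast domain is connected to another".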



 Now, the above argument says that we should keep this out of Neutron.  The
 problem with that is that people are using the OVS mechanism driver and
 would like a solution that works with that, implying something that's
 *inside* Neutron.  For that case, it's certainly valid to consider another
 means of implementation, but it wouldn't be my personal choice.  (For what
 it's worth I'm looking at ODL based controller implementations, so this
 isn't an issue for me personally.)

 If one were to implement the code in the Neutron API, even as an
 extension, I would question whether it's a sensible thing to attempt before
 the RPC server/REST server split is done, since it also extends the API
 between them.

  All this churn makes me believe that we probably just need to stop
  pretending we can achieve any sort of consensus on the approach and let
 the
  different alternatives develop independently, assumed they can all
 develop
  independently, and then let natural evolution take its course :)

 I tend to agree, but I think that one of the reasons why we are looking
 for a consensus is that API evolutions proposed through
 neutron-specs are rejected by core devs, because they rely on external
 components (SDN controllers, proprietary hardware...) or are not a
 high priority for the Neutron core devs.
 By finding a consensus, we show that several players are interested in
 such an API, and it helps to convince core-dev that this use-case, and
 its API, is missing in neutron.


 There are lots of players interested in an API, that much is clear, and
 all the more so if you consider that this feature has strong analogies with
 use cases such as switch port exposure and MPLS.  The problem is that it's
 clearly a fairly complex API with some variety of ways to implement it, and
 both of these things work against its acceptance.  Additionally, per the
 above discussion, I would say it's not essential for it to be core Neutron
 functionality.


 Now, if there is room to easily propose new APIs in Neutron, it makes
 sense to let new APIs appear and evolve, and then "let natural
 evolution take its course", as you said.


 Natural selection works poorly on APIs because once they exist they're
 hard to change and/or retire, due to backward compatibility requirements.


Well, that is true assuming that someone can or is willing to use them :)




 To me, this is in the scope of the advanced services project.


 Advanced services or no, the point I was making is that this is not
 something

Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Armando M.
Hi Sukhdev,

Hope you enjoyed Europe ;)

On 19 November 2014 17:19, Sukhdev Kapur sukhdevka...@gmail.com wrote:

 Folks,

 Like Ian, I am jumping in this very late as well - as I decided to travel
 Europe after the summit, just returned back and  catching up :-):-)

 I have noticed that this thread has gotten fairly convoluted and painful
 to read.

 I think Armando summed it up well in the beginning of the thread. There
 are basically three written proposals (listed in Armando's email - I pasted
 them again here).

 [1] https://review.openstack.org/#/c/134179/
 [2] https://review.openstack.org/#/c/100278/
 [3] https://review.openstack.org/#/c/93613/

 On this thread I see that the authors of first two proposals have already
 agreed to consolidate and work together. This leaves with two proposals.
 Both Ian and I were involved with the third proposal [3] and have
 reasonable idea about it. IMO, the use cases addressed by the third
 proposal are very similar to use cases addressed by proposal [1] and [2]. I
 can volunteer to follow up with Racha and Stephen from Ericsson to see if
 their use case will be covered with the new combined proposal. If yes, we
 have one converged proposal. If no, then we modify the proposal to
 accommodate their use case as well. Regardless, I will ask them to review
 and post their comments on [1].

 Having said that, this covers what we discussed during the morning session
 on Friday in Paris. Now, comes the second part which Ian brought up in the
 afternoon session on Friday.
 My initial reaction was, when heard his use case, that this new
 proposal/API should cover that use case as well (I am being bit optimistic
 here :-)). If not, rather than going into the nitty gritty details of the
 use case, let's see what modification is required to the proposed API to
 accommodate Ian's use case and adjust it accordingly.

 Now, the last point (already brought up by Salvatore as well as Armando) -
 the abstraction of the API, so that it meets the Neutron API criteria. I
 think this is the critical piece. I also believe the API proposed by [1] is
 very close. We should clean it up and take out references to ToR's or
 physical vs virtual devices. The API should work at an abstract level so
 that it can deal with both physical as well virtual devices. If we can
 agree to that, I believe we can have a solid solution.


Yes, I do think that the same API can target both: a 100% software solution
for L2GW as well as one that may want to rely on hardware support, in the
same spirit as any other Neutron API. I made the same point on spec [1].




 Having said that I would like to request the community to review the
 proposal submitted by Maruti in [1] and post comments on the spec with the
 intent to get a closure on the API. I see lots of good comments already on
 the spec. Lets get this done so that we can have a workable (even if not
 perfect) version of API in Kilo cycle. Something which we can all start to
 play with. We can always iterate over it, and make change as we get more
 and more use cases covered.


So far it seems like proposal [1] has the most momentum. I'd like to
consider [3] as one potential software implementation of the proposed API.
As I mentioned earlier, I'd rather start with a well defined problem, free
of any potential confusion or open to subjective interpretation; a loose
API suffers from both pitfalls, hence my suggestion to go with API proposed
in [1].



 Make sense?

 cheers..
 -Sukhdev


 On Tue, Nov 18, 2014 at 6:44 PM, Armando M. arma...@gmail.com wrote:

 Hi,

 On 18 November 2014 16:22, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Sorry I'm a bit late to this, but that's what you get from being on
 holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
 I swear I'll get to them.)


 Ah! I hope it was good at least :)



 On 17 November 2014 01:13, Mathieu Rohon mathieu.ro...@gmail.com
 wrote:

 Hi

 On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote:
  Last Friday I recall we had two discussions around this topic. One in
 the
  morning, which I think led to Maruti to push [1]. The way I
 understood [1]
  was that it is an attempt at unifying [2] and [3], by choosing the API
  approach of one and the architectural approach of the other.
 
  [1] https://review.openstack.org/#/c/134179/
  [2] https://review.openstack.org/#/c/100278/
  [3] https://review.openstack.org/#/c/93613/
 
  Then there was another discussion in the afternoon, but I am not 100%
 of the
  outcome.

 Me neither, that's why I'd like Ian, who led this discussion, to sum
 up the outcome from his point of view.


 So, the gist of what I said is that we have three, independent, use
 cases:

 - connecting two VMs that like to tag packets to each other (VLAN clean
 networks)
 - connecting many networks to a single VM (trunking ports)
 - connecting the outside world to a set of virtual networks

 We're discussing that last use case here.  The point I

Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Armando M.



 Beyond APIs there are two more things to mention.
 First, we need some sort of open source reference implementation for every
 use case. For hardware VTEP obviously this won't be possible, but perhaps
 [1] can be used for integration tests.


I think that, once the API settles, there may be multiple implementations
for bridging logical nets with physical ones; one could be the hardware
VTEP schema implemented by a switch, another could be the same schema
implemented on a white box, or something totally different. If we design
the API correctly, that should not matter. I believe that [1] should
clearly state that. I'll make sure this point is captured.


 The complexity of providing this implementation might probably drive the
 roadmap for supporting L2 GW use cases.
 Second, I still believe this is an advanced service and therefore a
 candidate for being outside of neutron's main repo (which, if you're
 following the discussions does not mean outside of neutron). The
 arguments I've seen so far do not yet convince me this thing has to be
 tightly integrated into the core neutron.


My working assumption is that this is going to bake elsewhere outside the
core. However this will need integration hooks to the core the same way
other advanced services do, so it's of paramount importance that the vendor
spin-off goes ahead, so that efforts like this one can evolve at the pace
they are comfortable with.




 Salvatore


 [1] http://openvswitch.org/pipermail/dev/2013-October/032530.html





 Make sense?

 cheers..
 -Sukhdev


 On Tue, Nov 18, 2014 at 6:44 PM, Armando M. arma...@gmail.com wrote:

 Hi,

 On 18 November 2014 16:22, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Sorry I'm a bit late to this, but that's what you get from being on
 holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
 I swear I'll get to them.)


 Ah! I hope it was good at least :)



 On 17 November 2014 01:13, Mathieu Rohon mathieu.ro...@gmail.com
 wrote:

 Hi

 On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote:
  Last Friday I recall we had two discussions around this topic. One
 in the
  morning, which I think led to Maruti to push [1]. The way I
 understood [1]
  was that it is an attempt at unifying [2] and [3], by choosing the
 API
  approach of one and the architectural approach of the other.
 
  [1] https://review.openstack.org/#/c/134179/
  [2] https://review.openstack.org/#/c/100278/
  [3] https://review.openstack.org/#/c/93613/
 
  Then there was another discussion in the afternoon, but I am not
 100% of the
  outcome.

 Me neither, that's why I'd like Ian, who led this discussion, to sum
 up the outcome from his point of view.


 So, the gist of what I said is that we have three, independent, use
 cases:

 - connecting two VMs that like to tag packets to each other (VLAN clean
 networks)
 - connecting many networks to a single VM (trunking ports)
 - connecting the outside world to a set of virtual networks

 We're discussing that last use case here.  The point I made was
 that:

 - there are more encaps in the world than just VLANs
 - they can all be solved in the same way using an edge API


 No disagreement all the way up to this point, assuming that I don't worry
 about what this edge API really is.


 - if they are solved using an edge API, the job of describing the
 network you're trying to bring in (be it switch/port/vlan, or MPLS label
 stack, or l2tpv3 endpoint data) is best kept outside of Neutron's API,
 because Neutron can't usefully do anything with it other than validate it
 and hand it off to whatever network control code is being used.  (Note that
 most encaps will likely *not* be implemented in Neutron's inbuilt control
 code.)


 This is where the disagreement begins, as far as I am concerned; in fact
 we already have a well defined way of describing what a network entity in
 Neutron is, namely an L2 broadcast domain abstraction. An L2 gateway API
 that is well defined and well scoped should just express how one can be
 connected to another, nothing more, at least as a starting point.



 Now, the above argument says that we should keep this out of Neutron.
 The problem with that is that people are using the OVS mechanism driver and
 would like a solution that works with that, implying something that's
 *inside* Neutron.  For that case, it's certainly valid to consider another
 means of implementation, but it wouldn't be my personal choice.  (For what
 it's worth I'm looking at ODL based controller implementations, so this
 isn't an issue for me personally.)

 If one were to implement the code in the Neutron API, even as an
 extension, I would question whether it's a sensible thing to attempt before
 the RPC server/REST server split is done, since it also extends the API
 between them.

  All this churn makes me believe that we probably just need to stop
  pretending we can achieve any sort of consensus on the approach and
 let the
  different alternatives develop independently

Re: [openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-11-26 Thread Armando M.
Hi Don,

You should look at this one:

https://wiki.openstack.org/wiki/NeutronSubteamCharters

Also, it would be good to start feeding the content of that gdoc into a
neutron-specs blueprint, using template [1] and process [2], bearing in
mind these dates [3]

1.
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/template.rst
2. https://wiki.openstack.org/wiki/Blueprints
3. https://wiki.openstack.org/wiki/NeutronKiloProjectPlan

HTH
Armando


On 24 November 2014 at 14:27, Carl Baldwin c...@ecbaldwin.net wrote:

 Don,

 Could the spec linked to your BP be moved to the specs repository?
 I'm hesitant to start reading it as a google doc when I know I'm going
 to want to make comments and ask questions.

 Carl

 On Thu, Nov 13, 2014 at 9:19 AM, Don Kehn dek...@gmail.com wrote:
  If this shows up twice sorry for the repeat:
 
  Armando, Carl:
  During the Summit, Armando and I had a very quick conversation concerning
  a blueprint that I submitted,
  https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration and
  Armando had mentioned the possibility of getting together a sub-group
 tasked
  with DHCP Neutron concerns. I have talked with Infoblox folks (see
  https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), and
 everyone
  seems to be in agreement that there is synergy especially concerning the
  development of a relay and potentially looking into how DHCP is handled.
 In
  addition, during the Friday meetup session on DHCP that I gave, there
 seems
  to be some general interest from some of the operators as well.
 
  So what would be the formality in going forth to start a sub-group and
  getting this underway?
 
  DeKehn
 
 
 
 




Re: [openstack-dev] [neutron] Changes to the core team

2014-12-02 Thread Armando M.
Congrats to Henry and Kevin, +1!


[openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-05 Thread Armando M.
Hi folks,

For a few weeks now the Neutron team has worked tirelessly on [1].

This initiative stems from the fact that, as the project matures, its
processes and contribution guidelines need to evolve with it. This is to
ensure that the project can keep on thriving in order to meet the needs of
an ever-growing community.

The effort of documenting intentions, and fleshing out the various details
of the proposal is about to reach an end, and we'll soon kick the tires to
put the proposal into practice. Since the spec has grown pretty big, I'll
try to capture the tl;dr below.

If you have any comment please do not hesitate to raise them here and/or
reach out to us.

tl;dr 

From the Kilo release, we'll initiate a set of steps to change the
following areas:

   - Code structure: every plugin or driver that exists, or wants to exist,
   as part of the Neutron project is decomposed into a slim vendor
   integration (which lives in the Neutron repo), plus a bulkier vendor
   library (which lives in an independent, publicly available repo);
   - Contribution process: this extends to the following aspects:
  - Design and Development: the process is largely unchanged for the
   part that pertains to the vendor integration; the maintainer team is
   fully self-governed for the design and development of the vendor library;
  - Testing and Continuous Integration: maintainers will be required to
   support their vendor integration with 3rd party CI testing; the
   requirements for 3rd party CI testing are largely unchanged;
  - Defect management: the process is largely unchanged; issues
   affecting the vendor library can be tracked with whichever tool/process
   the maintainers see fit. In cases where vendor library fixes need to be
   reflected in the vendor integration, the usual OpenStack defect
   management processes apply.
  - Documentation: there will be some changes to the way plugins and
  drivers are documented with the intention of promoting discoverability of
  the integrated solutions.
   - Adoption and transition plan: we strongly advise maintainers to stay
   abreast of the developments of this effort, as their code, their CI, etc.
   will be affected. The core team will provide guidelines and support
   throughout this cycle to ensure a smooth transition.
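The code-structure bullet above — a slim in-tree vendor integration
delegating to a bulkier out-of-tree vendor library — can be sketched
roughly as follows. Every name here is hypothetical; this only illustrates
the shape of the split, not any real plugin:

```python
# Hypothetical sketch of the proposed decomposition. The "vendor
# integration" stays slim and in-tree; all backend logic lives in a
# separately packaged "vendor library". No real Neutron APIs are used.

class AcmeVendorLibrary:
    """Stands in for the out-of-tree vendor library (e.g. a
    'networking-acme' package published by the maintainer)."""

    def create_network_backend(self, network):
        # Talk to the vendor backend here; stubbed for illustration.
        return {"id": network["id"], "status": "ACTIVE"}


class AcmeShimPlugin:
    """The slim vendor integration kept in the Neutron repo: it only
    translates Neutron-facing calls into vendor-library calls."""

    def __init__(self, lib=None):
        self._lib = lib or AcmeVendorLibrary()

    def create_network(self, context, network):
        # No vendor-specific logic lives in-tree; just delegation.
        return self._lib.create_network_backend(network)


plugin = AcmeShimPlugin()
result = plugin.create_network(context=None, network={"id": "net-1"})
```

The point of the split is visible even in this toy version: the review
load on the Neutron core team is limited to the thin shim, while the
vendor library can evolve at its own pace in its own repo.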

To learn more, please refer to [1].

Many thanks,
Armando

[1] https://review.openstack.org/#/c/134680


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-12-05 Thread Armando M.
For anyone who had an interest in following this thread, they might want to
have a look at [1], and [2] (which is the tl;dr version [1]).

HTH
Armando

[1] https://review.openstack.org/#/c/134680
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052346.html


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread Armando M.
 Kotton gkot...@vmware.com wrote:

  Hi Kyle,
  I am not missing the point. I understand the proposal. I just think
 that it has some shortcomings (unless I misunderstand, which will certainly
 not be the first time and most definitely not the last). The thinning out
 is to have a shim in place. I understand this and this will be the entry
 point for the plugin. I do not have a concern for this. My concern is that
 we are not doing this with the ML2 off the bat. That should lead by example
  as it is our reference architecture. Let's not kid anyone, but we are going
 to hit some problems with the decomposition. I would prefer that it be done
 with the default implementation. Why?

 The proposal is to move vendor-specific logic out of the tree to increase
 vendor control over such code while decreasing load on reviewers.  ML2
 doesn’t contain vendor-specific logic - that’s the province of ML2 drivers
 - so it is not a good target for the proposed decomposition by itself.


• Because we will fix them quicker, as it is something that prevents
 Neutron from moving forward
• We will just need to fix it in one place first and not in N (where
 N is the number of vendor plugins)
• This is a community effort – so we will have a lot more eyes on
 it
• It will provide a reference architecture for all new plugins
 that want to be added to the tree
• It will provide a working example for plugins that are already
 in tree and are to be replaced by the shim
  If we really want to do this, we can say freeze all development (which
 is just approvals for patches) for a few days so that we can just
 focus on this. I stated what I think should be the process on the review.
 For those who do not feel like finding the link:
• Create a stack forge project for ML2
• Create the shim in Neutron
• Update devstack to use the two repos and the shim
  When #3 is up and running, we switch to that as the gate. Then we
 start a stopwatch on all other plugins.

 As was pointed out on the spec (see Miguel’s comment on r15), the ML2
 plugin and the OVS mechanism driver need to remain in the main Neutron repo
 for now.  Neutron gates on ML2+OVS and landing a breaking change in the
 Neutron repo along with its corresponding fix to a separate ML2 repo would
 be all but impossible under the current integrated gating scheme.
 Plugins/drivers that do not gate Neutron have no such constraint.


 Maru


  Sure, I’ll catch you on IRC tomorrow. I guess that you guys will bash
 out the details at the meetup. Sadly I will not be able to attend – so you
 will have to delay on the tar and feathers.
  Thanks
  Gary
 
 
  From: mest...@mestery.com mest...@mestery.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Sunday, December 7, 2014 at 7:19 PM
  To: OpenStack List openstack-dev@lists.openstack.org
  Cc: openst...@lists.openstack.org openst...@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition
 
  Gary, you are still missing the point of this proposal. Please see my
 comments in review. We are not forcing things out of tree, we are thinning
 them. The text you quoted in the review makes that clear. We will look at
 further decomposing ML2 post Kilo, but we have to be realistic with what we
 can accomplish during Kilo.
 
  Find me on IRC Monday morning and we can discuss further if you still
 have questions and concerns.
 
  Thanks!
  Kyle
 
  On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton gkot...@vmware.com wrote:
  Hi,
  I have raised my concerns on the proposal. I think that all plugins
 should be treated on an equal footing. My main concern is having the ML2
 plugin in tree whilst the others will be moved out of tree will be
 problematic. I think that the model will be complete if the ML2 was also
 out of tree. This will help crystalize the idea and make sure that the
 model works correctly.
  Thanks
  Gary
 
  From: Armando M. arma...@gmail.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Saturday, December 6, 2014 at 1:04 AM
  To: OpenStack List openstack-dev@lists.openstack.org, 
 openst...@lists.openstack.org openst...@lists.openstack.org
  Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition
 
  Hi folks,
 
  For a few weeks now the Neutron team has worked tirelessly on [1].
 
  This initiative stems from the fact that as the project matures,
 evolution of processes and contribution guidelines need to evolve with it.
 This is to ensure that the project can keep on thriving in order to meet
 the needs of an ever growing community.
 
  The effort of documenting intentions, and fleshing out the various
 details of the proposal is about to reach an end, and we'll soon kick the
 tires to put the proposal into practice. Since the spec has grown pretty
 big, I'll try to capture the tl;dr below.
 
  If you have any comment please do not hesitate to raise them here
 and/or reach out to us.
 
  tl;dr

Re: [openstack-dev] [neutron] Vendor Plugin Decomposition and NeutronClient vendor extension

2014-12-12 Thread Armando M.
On 12 December 2014 at 22:18, Ryu Ishimoto r...@midokura.com wrote:


 Hi All,

 It's great to see the vendor plugin decomposition spec[1] finally getting
 merged!  Now that the spec is completed, I have a question on how this may
 impact neutronclient, and in particular, its handling of vendor extensions.


Thanks for the excitement :)



 One of the great things about splitting out the plugins is that it will
 allow vendors to implement vendor extensions more rapidly.  Looking at the
 neutronclient code, however, it seems that these vendor extension commands
  are embedded inside the project, and don't seem easily extensible.  It
 feels natural that, now that neutron vendor code is split out,
 neutronclient should also do the same.

 Of course, you could always fork neutronclient yourself, but I'm wondering
 if there is any plan on improving this.  Admittedly, I don't have a great
 solution myself but I'm thinking something along the line of allowing
 neutronclient to load commands from an external directory.  I am not
 familiar enough with neutronclient to know if there are technical
 limitation to what I'm suggesting, but I would love to hear thoughts of
 others on this.


There is quite a bit of road ahead of us. We haven't yet thought through
how to handle extensions client side. Server side, the extension
mechanism is already quite flexible, but we gotta learn to walk before we
can run!

Having said that, your points are well taken, but most likely we won't be
making much progress on these until we have provided and guaranteed a
smooth transition for all plugins and drivers as suggested by the spec
referenced below. Stay tuned!
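To make the idea a bit more concrete, here is a purely hypothetical sketch
of what externally registered client commands could look like — a command
table that vendor packages extend, e.g. via entry points discovered at
startup. None of these names exist in neutronclient, and this is not a
committed design:

```python
# Illustrative only: a command registry that external vendor packages
# could extend. In a real implementation the registration would more
# likely happen through setuptools entry points than a direct call.

COMMANDS = {
    # Commands shipped with the client itself.
    "net-list": lambda args: "listing networks",
}


def register_vendor_commands(extra_commands):
    """Hook a vendor package could use to add its own CLI commands."""
    COMMANDS.update(extra_commands)


# A hypothetical out-of-tree package registering a vendor extension:
register_vendor_commands({
    "acme-widget-list": lambda args: "listing acme widgets",
})

available = sorted(COMMANDS)
```

The design choice being sketched is simply that the client iterates over
a discoverable namespace rather than hard-coding its command set.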

Cheers,
Armando



 Thanks in advance!

 Best,
 Ryu

 [1] https://review.openstack.org/#/c/134680/



Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-12 Thread Armando M.
On 12 December 2014 at 23:01, Yuriy Shovkoplias yshovkopl...@mirantis.com
wrote:

 Dear neutron community,

 Can you please clarify couple points on the vendor code decomposition?
  - Assuming I would like to create the new driver now (Kilo development
 cycle) - is it already allowed (or mandatory) to follow the new process?

 https://review.openstack.org/#/c/134680/


Yes. See [1] for more details.


 - Assuming the new process is already in place, are the following
 guidelines still applicable for the vendor integration code (not for vendor
 library)?

 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
 The following is a list of requirements for inclusion of code upstream:

- Participation in Neutron meetings, IRC channels, and email lists.
- A member of the plugin/driver team participating in code reviews of
other upstream code.


I see no reason why you wouldn't follow those guidelines, as a general rule
of thumb. Having said that, some of the wording would need to be tweaked to
take into account the new contribution model. Bear in mind that I
started adding some developer documentation in [2], to give a practical
guide to the proposal. More to follow.

Cheers,
Armando

[1]
http://docs-draft.openstack.org/80/134680/17/check/gate-neutron-specs-docs/2a7afdd/doc/build/html/specs/kilo/core-vendor-decomposition.html#adoption-and-deprecation-policy
[2]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/core-vendor-decomposition,n,z


 Regards,
 Yuri

 On Thu, Dec 11, 2014 at 3:23 AM, Gary Kotton gkot...@vmware.com wrote:


 On 12/11/14, 12:50 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 +100. I vote -1 there and would like to point out that we *must* keep
 history during the split, and split from u/s code base, not random
 repositories. If you don't know how to achieve this, ask oslo people,
 they did it plenty of times when graduating libraries from
 oslo-incubator.
 /Ihar
 
 On 10/12/14 19:18, Cedric OLLIVIER wrote:
  https://review.openstack.org/#/c/140191/
 
  2014-12-09 18:32 GMT+01:00 Armando M. arma...@gmail.com
  mailto:arma...@gmail.com:
 
 
  By the way, if Kyle can do it in his teeny tiny time that he has
  left after his PTL duties, then anyone can do it! :)
 
  https://review.openstack.org/#/c/140191/

  This patch loses the recent hacking changes that we have made. This is a
 small example to try and highlight the problems that we may incur as a
 community.

 
  Fully cloning Dave Tucker's repository [1] and the outdated fork of
  the ODL ML2 MechanismDriver included raises some questions (e.g.
  [2]). I wish the next patch set removes some files. At least it
  should take the mainstream work into account (e.g. [3]) .
 
  [1] https://github.com/dave-tucker/odl-neutron-drivers [2]
  https://review.openstack.org/#/c/113330/ [3]
  https://review.openstack.org/#/c/96459/
 
 
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-13 Thread Armando M.
This was more of a brute force fix!

I didn't have time to go with finesse, and instead I went in with the
hammer :)

That said, we want to make sure that the upgrade path to Kilo is as
painless as possible, so we'll need to review the Release Notes [1] to
reflect the fact that we'll be providing a seamless migration to the new
adv services structure.

[1] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes_6


Cheers,
Armando

On 12 December 2014 at 09:33, Kyle Mestery mest...@mestery.com wrote:

 This has merged now, FYI.

 On Fri, Dec 12, 2014 at 10:28 AM, Doug Wiegley do...@a10networks.com
 wrote:

  Hi all,

  Neutron grenade jobs have been failing since late afternoon Thursday,
 due to split fallout.  Armando has a fix, and it’s working its way through
 the gate:

  https://review.openstack.org/#/c/141256/

  Get your rechecks ready!

  Thanks,
 Doug


   From: Douglas Wiegley do...@a10networks.com
 Date: Wednesday, December 10, 2014 at 10:29 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] Services are now split out and
 neutron is open for commits!

   Hi all,

  I’d like to echo the thanks to all involved, and thanks for the
 patience during this period of transition.

  And a logistical note: if you have any outstanding reviews against the
 now missing files/directories (db/{loadbalancer,firewall,vpn}, services/,
 or tests/unit/services), you must re-submit your review against the new
 repos.  Existing neutron reviews for service code will be summarily
 abandoned in the near future.

  Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews.  I’ll
 have that branch merged in the morning, and ping in channel when it’s ready
 for submissions.

  Finally, if any tempest lovers want to take a crack at splitting the
 tempest runs into four, perhaps using salv’s reviews of splitting them in
 two as a guide, and then creating jenkins jobs, we need some help getting
 those going.  Please ping me directly (IRC: dougwig).

  Thanks,
 doug


   From: Kyle Mestery mest...@mestery.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, December 10, 2014 at 4:10 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron] Services are now split out and
 neutron is open for commits!

   Folks, just a heads up that we have completed splitting out the
 services (FWaaS, LBaaS, and VPNaaS) into separate repositores. [1] [2] [3].
 This was all done in accordance with the spec approved here [4]. Thanks to
 all involved, but a special thanks to Doug and Anita, as well as infra.
 Without all of their work and help, this wouldn't have been possible!

 Neutron and the services repositories are now open for merges again.
 We're going to be landing some major L3 agent refactoring across the 4
 repositories in the next four days, look for Carl to be leading that work
 with the L3 team.

  In the meantime, please report any issues you have in launchpad [5] as
 bugs, and find people in #openstack-neutron or send an email. We've
 verified things come up and all the tempest and API tests for basic neutron
 work fine.

 In the coming week, we'll be getting all the tests working for the
 services repositories. Medium term, we need to also move all the advanced
 services tempest tests out of tempest and into the respective repositories.
 We also need to beef these tests up considerably, so if you want to help
 out on a critical project for Neutron, please let me know.

 Thanks!
 Kyle

 [1] http://git.openstack.org/cgit/openstack/neutron-fwaas
 [2] http://git.openstack.org/cgit/openstack/neutron-lbaas
 [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
 [4]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
 [5] https://bugs.launchpad.net/neutron



Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread Armando M.
On 15 December 2014 at 09:53, Neil Jerram neil.jer...@metaswitch.com
wrote:

 Hi all,

 Following the approval for Neutron vendor code decomposition
 (https://review.openstack.org/#/c/134680/), I just wanted to comment
 that it appears to work fine to have an ML2 mechanism driver _entirely_
 out of tree, so long as the vendor repository that provides the ML2
 mechanism driver does something like this to register their driver as a
 neutron.ml2.mechanism_drivers entry point:

   setuptools.setup(
   ...,
   entry_points = {
   ...,
   'neutron.ml2.mechanism_drivers': [
   'calico = xyz.openstack.mech_xyz:XyzMechanismDriver',
   ],
   },
   )

 (Please see

 https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c
 for the complete change and detail, for the example that works for me.)

 Then Neutron and the vendor package can be separately installed, and the
 vendor's driver name configured in ml2_conf.ini, and everything works.

 Given that, I wonder:

 - is that what the architects of the decomposition are expecting?


 - other than for the reference OVS driver, are there any reasons in
   principle for keeping _any_ ML2 mechanism driver code in tree?


The approach you outlined is reasonable, and new plugins/drivers, like
yours, may find it easier to approach Neutron integration this way.
However, to ensure a smoother migration path for existing plugins and
drivers, it was deemed more sensible to go down the path being proposed in
the spec referenced above.
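The entry point registered in the quoted setup.py can be exercised at
runtime roughly like this. Neutron itself loads mechanism drivers through
stevedore; this stdlib-only sketch reuses the 'calico' name from the
quoted example, and with no such package installed the lookup simply
comes back empty:

```python
# Stdlib-only illustration of entry-point discovery for the
# 'neutron.ml2.mechanism_drivers' namespace. Neutron does the real
# loading via stevedore's NamedExtensionManager.
from importlib.metadata import entry_points


def find_driver(namespace, name):
    eps = entry_points()
    if hasattr(eps, "select"):            # Python 3.10+
        group = eps.select(group=namespace)
    else:                                 # Python 3.8/3.9 dict interface
        group = eps.get(namespace, [])
    for ep in group:
        if ep.name == name:
            return ep.load()              # import and return the class
    return None


# Returns the driver class when the vendor package is installed;
# None otherwise (as in a bare environment with no vendor packages).
driver_cls = find_driver("neutron.ml2.mechanism_drivers", "calico")
```

This is exactly why a fully out-of-tree driver "just works" once the
vendor package is installed and the driver name appears in ml2_conf.ini.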



 Many thanks,
  Neil



Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread Armando M.



 Good questions. I'm also looking for the linux bridge MD, SRIOV MD...
 Who will be responsible for these drivers?

 Excellent question. In my opinion, a 'technology'-specific but not
 vendor-specific MD (like SRIOV) should not be maintained by a specific
 vendor. It should be open to all interested parties for contribution.


I don't think that anyone is suggesting that these drivers be developed in
silos; instead, one of the objectives is to allow them to evolve more
rapidly, and in the open, where anyone can participate.



 The OVS driver is maintained by Neutron community, vendor specific
 hardware driver by vendor, SDN controllers driver by their own community or
 vendor. But there are also other drivers like SRIOV, which are general for
 a lot of vendor agnostic backends, and can't be maintained by a certain
 vendor/community.


Certain technologies, like the ones mentioned above may require specific
hardware; even though they may not be particularly associated with a
specific vendor, some sort of vendor support is indeed required, like 3rd
party CI. So, grouping them together under an hw-accelerated umbrella, or
whichever other name that sticks, may make sense long term should the
number of drivers really ramp up as hinted below.



 So, it would be better to keep some general backend MD in tree besides
 SRIOV. There are also vif-type-tap, vif-type-vhostuser,
 hierarchy-binding-external-VTEP ... We can implement a very thin in-tree
 base MD that only handle vif bind which is backend agonitsc, then backend
 provider is free to implement their own service logic, either by an backend
 agent, or by a driver derived from the base MD for agentless scenery.

 Keeping general backend MDs in tree sounds reasonable.
 Regards

  Many thanks,
   Neil
 


Re: [openstack-dev] [neutron] Changes to the core team

2015-01-15 Thread Armando M.
+1

On 15 January 2015 at 14:46, Edgar Magana edgar.mag...@workday.com wrote:

  +1 For adding Doug as Core in Neutron!

  I have seen his work on the services part and he is a great member of
 the OpenStack community!

  Edgar

   From: Kyle Mestery mest...@mestery.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, January 15, 2015 at 2:31 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron] Changes to the core team

The last time we looked at core reviewer stats was in December [1]. In
 looking at the current stats, I'm going to propose some changes to the core
 team. Reviews are the most important part of being a core reviewer, so we
 need to ensure cores are doing reviews. The stats for the 90 day period [2]
 indicate some changes are needed for core reviewers who are no longer
 reviewing on pace with the other core reviewers.

  First of all, I'm removing Sumit Naiksatam from neutron-core. Sumit has
 been a core reviewer for a long time, and his past contributions are very
 much thanked by the entire OpenStack Neutron team. If Sumit jumps back in
 with thoughtful reviews in the future, we can look at getting him back as a
 Neutron core reviewer. But for now, his stats indicate he's not reviewing
 at a level consistent with the rest of the Neutron core reviewers.

  As part of the change, I'd like to propose Doug Wiegley as a new Neutron
 core reviewer. Doug has been actively reviewing code across not only all
 the Neutron projects, but also other projects such as infra. His help and
 work in the services split in December were the reason we were so
 successful in making that happen. Doug has also been instrumental in the
 Neutron LBaaS V2 rollout, as well as helping to merge code in the other
 neutron service repositories.

 I'd also like to take this time to remind everyone that reviewing code is
 a responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 that +1/-1 reviews are very useful, and I encourage everyone to continue
 reviewing code even if you are not a core reviewer.

 Existing neutron cores, please vote +1/-1 for the addition of Doug to the
 core team.

 Thanks!
 Kyle

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-December/051986.html
 [2] http://russellbryant.net/openstack-stats/neutron-reviewers-90.txt



Re: [openstack-dev] [neutron] DVR and l2_population

2015-02-11 Thread Armando M.
L2pop is a requirement.

With the existing agent-based architecture, L2pop is used to update the FDB
tables on the compute hosts to make east/west traffic possible whenever a
new port is created or an existing one is updated.
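Conceptually (the exact RPC payload format is internal to Neutron and may
differ from this sketch), an l2pop FDB update maps each network to the
tunnel endpoints of the agents hosting its ports, plus the MAC/IP pairs
behind each endpoint, from which each host can derive unicast forwarding
entries:

```python
# Illustrative shape of an l2pop FDB update; field names are a sketch,
# not the authoritative RPC format.
fdb_update = {
    "net-uuid": {
        "network_type": "vxlan",
        "segment_id": 1001,
        "ports": {
            # agent tunnel IP -> MAC/IP pairs hosted behind it
            "192.0.2.10": [("fa:16:3e:aa:bb:cc", "10.0.0.5")],
            "192.0.2.11": [("fa:16:3e:dd:ee:ff", "10.0.0.6")],
        },
    },
}

# A receiving agent can program one unicast entry per remote MAC,
# pointing at the owning agent's tunnel endpoint, so east/west traffic
# is forwarded directly instead of flooded.
entries = [
    (mac, tunnel_ip)
    for net in fdb_update.values()
    for tunnel_ip, pairs in net["ports"].items()
    for mac, _ip in pairs
]
```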

Cheers,
Armando

On 10 February 2015 at 23:07, Itzik Brown itz...@redhat.com wrote:

 Hi,

 In the Networking guide [1] there is a requirement for l2 population, both
 as a mechanism driver and in the OVS agent, when enabling DVR.
 Is this a requirement, and if so, what is the reason?

 Thanks,
 Itzik


 [1] http://docs.openstack.org/networking-guide/content/ha-dvr.html



Re: [openstack-dev] [Neutron] per-agent/driver/plugin requirements

2015-02-18 Thread Armando M.
On 17 February 2015 at 22:00, YAMAMOTO Takashi yamam...@valinux.co.jp
wrote:

 hi,

 i want to add an extra requirement specific to OVS-agent.
 (namely, I want to add ryu for ovs-ofctl-to-python blueprint. [1]
 but the question is not specific to the blueprint.)
 to avoid messing up deployments without OVS-agent, such a requirement
 should be per-agent/driver/plugin/etc.  however, there currently
 seems no standard mechanism for such a requirement.

 some ideas:

 a. don't bother to make it per-agent.
add it to neutron's requirements. (and global-requirement)
simple, but this would make non-ovs plugin users unhappy.

 b. make devstack look at per-agent extra requirements file in neutron tree.
eg. neutron/plugins/$Q_AGENT/requirements.txt

 c. move OVS agent to a separate repository, just like other
after-decomposition vendor plugins.  and use requirements.txt there.
for longer term, this might be a way to go.  but i don't want to
block my work until it happens.

 d. follow the way how openvswitch is installed by devstack.
a downside: we can't give a jenkins run for a patch which introduces
an extra requirement.  (like my patch for the mentioned blueprint [2])

 i think b. is the most reasonable choice, at least for short/mid term.

 any comments/thoughts?


One thing that I want to ensure we are clear on is about the agent's
OpenFlow communication strategy going forward, because that determines how
we make a decision based on the options you have outlined: do we enforce
the use of ryu while ovs-ofctl goes away from day 0? Or do we provide an
'opt-in' type of approach where users can explicitly choose if/when to
adopt ryu in favor of ovs-ofctl? The latter means that we'll keep both
solutions for a reasonable amount of time to smooth the transition process.

If we adopt the former (i.e. ryu goes in, ovs-ofctl goes out), then option
a) makes sense to me, but I am not sure how happy deployers and packagers
are going to be with this approach. There's already too much going on
in Kilo right now :)

If we adopt the latter, then I think it's desirable to have two separate
configurations with which we test the agent. This means that we'll have a
new job (besides the existing ones) that runs the agent with ryu instead of
ovs-ofctl. This means that option d) is the viable one, where DevStack will
have to install the dependency based on some configuration variable that is
determined by the openstack-infra project definition.
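Either option boils down to a small amount of DevStack logic. A sketch of
option b) — Q_AGENT is an existing DevStack variable, but the per-agent
requirements file layout is purely hypothetical:

```shell
# Hypothetical DevStack-side handling of a per-agent requirements file:
# install extra dependencies only for the agent actually configured.
Q_AGENT=${Q_AGENT:-openvswitch}
REQS="neutron/plugins/$Q_AGENT/requirements.txt"
if [ -f "$REQS" ]; then
    # In DevStack proper this would go through pip_install, not raw pip.
    pip install -r "$REQS"
fi
echo "checked $REQS"
```

Option d) would differ only in that the install step is keyed off a
configuration variable set by the job definition rather than a file in
the Neutron tree.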

Thoughts?

Cheers,
Armando



 YAMAMOTO Takashi

 [1] https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
 [2] https://review.openstack.org/#/c/153946/



[openstack-dev] [neutron] neutron-drivers meeting

2015-02-17 Thread Armando M.
Hi folks,

I was wondering if we should have a special neutron-drivers meeting on
Wednesday Feb 18th (9:30AM CST / 7:30AM PST) to discuss recent patches
where a few cores have not reached consensus on, namely:

- https://review.openstack.org/#/c/155373/
- https://review.openstack.org/#/c/148318/

The end of the Kilo cycle is fast approaching and a speedy resolution of
these matters would be preferable. I fear that leaving these items to the Open
Discussion slot in the weekly IRC meeting will not give us enough time.

Is there any other item we need to get consensus on?

Anyone is welcome to join.

Thanks,
Armando


Re: [openstack-dev] Use of egg snapshots of neutron code in neutron-*aas projects/distributing openstack

2015-02-17 Thread Armando M.
I also failed to understand the issue, and I commented on the bug report,
where it's probably best to continue this conversation.

Thanks,
Armando

On 16 February 2015 at 07:54, Ihar Hrachyshka ihrac...@redhat.com wrote:


 On 02/16/2015 04:13 PM, James Page wrote:
  Hi Folks
 
  The split-out drivers for vpn/fw/lb as-a-service all make use of a
  generated egg of the neutron git repository as part of their unit
  test suite dependencies.
 
  This presents a bit of a challenge for us downstream in
  distributions,

 I am packaging neutron for RDO, but I fail to understand your issue.

  as we can't really pull in a full source egg of neutron from
  git.openstack.org; we have the code base for neutron core
  available (python-neutron), but that does not appear to be enough
  (see [0]).

 Don't you ship egg files with python-neutron package? I think this
 should be enough to get access to entry points.

 
  I would appreciate if dev's working in this area could a) review
  the bug and the problems we are seeing a b) think about how this
  can work for distributions - I'm happy to create a new
  'neutron-testing' type package from the neutron codebase to support
  this stuff, but right now I'm a bit unclear on exactly what its
  needs to contain!
 
  Cheers
 
  James
 
 
  [0] https://bugs.launchpad.net/neutron/+bug/1422376
 
 
 
 



Re: [openstack-dev] oslo.messaging 1.6.0 released

2015-01-27 Thread Armando M.
Is there any chance that this release might have caused bug [1]? I am still
root-causing what's going on... any input is highly appreciated.

Thanks,
Armando

[1] https://bugs.launchpad.net/grenade/+bug/1415284

On 27 January 2015 at 14:23, Doug Hellmann d...@doughellmann.com wrote:

 There were some issues with the build job, so this release has just gone
 live. I apologize for the delay.

 Doug

 On Tue, Jan 27, 2015, at 02:05 PM, Doug Hellmann wrote:
  The Oslo team is pleased to announce the release of:
 
  oslo.messaging 1.6.0: Oslo Messaging API
 
  The primary reason for this release is to move the code
  out of the oslo namespace package as part of
 
 https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages
 
  This release also includes requirements updates, and several months' worth
  of bug fixes.
 
  For more details, please see the git log history below and:
 
  http://launchpad.net/oslo.messaging/+milestone/1.6.0
 
  Please report issues through launchpad:
 
  http://bugs.launchpad.net/oslo.messaging
  Changes in /home/dhellmann/repos/openstack/oslo.messaging 1.5.1..1.6.0
  --
 
  bfb8c97 Updated from global requirements
  eb92511 Expose _impl_test for designate
  ee31a84 Update Oslo imports to remove namespace package
  563376c Speedup the rabbit tests
  f286ef1 Fix functionnal tests
  db7371c Fixed docstring for Notifier
  386f5da zmq: Refactor test case shared code
  7680897 Add more private symbols to the old namespace package
  2832051 Updated from global requirements
  b888ee3 Fixes test_two_pools_three_listener
  0c49f0d Add TimerTestCase missing tests case
  be9fca7 fix qpid test issue with eventlet monkey patching
  0ca1b1e Make setup.cfg packages include oslo.messaging
  408d0da Upgrade to hacking 0.10
  a6d068a Add oslo.messaging._drivers.common for heat tests
  1fa0e6a Port zmq driver to Python 3
  bc8675a fix qpid test issue with eventlet monkey patching
  e55a83e Move files out of the namespace package
  31a149a Add a info log when a reconnection occurs
  44132d4 rabbit: fix timeout timer when duration is None
  c18f9f7 Don't log each received messages
  3e2d142 Fix some comments in a backporting review session
  c40ba04 Enable IPv6-support in libzmq by default
  372bc49 Add a thread + futures executor based executor
  56a9c55 safe_log Sanitize Passwords in List of Dicts
  709c401 Updated from global requirements
  98bfdd1 rabbit: add some tests when rpc_backend is set
  d3e6ea1 Warns user if thread monkeypatch is not done
  cd71c47 Add functional and unit 0mq driver tests
  15aa5cb The executor doesn't need to set the timeout
  43a9dc1 qpid: honor iterconsume timeout
  023b7f4 rabbit: more precise iterconsume timeout
  737afde Workflow documentation is now in infra-manual
  66db2b3 Touch up grammar in warning messages
  4e6dabb Make the RPCVersionCapError message clearer
  254405d Doc: 'wait' releases driver connection, not 'stop'
  09cd9c0 Don't allow call with fanout target
  0844037 Add an optional executor callback to dispatcher
  eb21f6b Warn user if needed when the process is forked
  7ad0d7e Fix reconnect race condition with RabbitMQ cluster
  1624793 Add more TLS protocols to rabbit impl
  6987b8a Fix incorrect attribute name in matchmaker_redis
 
  Diffstat (except docs and test files)
  -
 
  CONTRIBUTING.rst   |   7 +-
  oslo/messaging/__init__.py |  15 +
  oslo/messaging/_cmd/__init__.py|   1 -
  oslo/messaging/_cmd/zmq_receiver.py|  39 -
  oslo/messaging/_drivers/__init__.py|   1 -
  oslo/messaging/_drivers/amqp.py| 222 -
  oslo/messaging/_drivers/amqpdriver.py  | 472 --
  oslo/messaging/_drivers/base.py| 108 ---
  oslo/messaging/_drivers/common.py  | 343 +---
  oslo/messaging/_drivers/impl_fake.py   | 233 -
  oslo/messaging/_drivers/impl_qpid.py   | 731 
  oslo/messaging/_drivers/impl_rabbit.py | 783
  -
  oslo/messaging/_drivers/impl_zmq.py| 941
  
  oslo/messaging/_drivers/matchmaker.py  | 321 ---
  oslo/messaging/_drivers/matchmaker_redis.py| 139 ---
  oslo/messaging/_drivers/matchmaker_ring.py | 104 ---
  oslo/messaging/_drivers/pool.py|  88 --
  oslo/messaging/_drivers/protocols/__init__.py  |   0
  oslo/messaging/_drivers/protocols/amqp/__init__.py |   0
  .../_drivers/protocols/amqp/controller.py  | 589 -
  oslo/messaging/_drivers/protocols/amqp/driver.py   | 295 ---
  .../messaging/_drivers/protocols/amqp/eventloop.py | 339 ---
  oslo/messaging/_drivers/protocols/amqp/opts.py |  73 --
  oslo/messaging/_executors/base.py  |  33 +-
  

Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-09 Thread Armando M.

 If we were standing at a place with a detailed manual upgrade document
 that explained how to do minimal VM downtime, that a few ops had gone
 through and proved out, that would be one thing. And we could figure out
 which parts made sense to put tooling around to make this easier for
 everyone.

 But we seem far from there.

 My suggestion is to start with a detailed document, figure out that it
 works, and build automation around that process.


The problem is that whatever documented solution we come up with is
going to be so opinionated as to be of hardly any use in general terms, let
alone worth automating. Furthermore, its lifespan is going to be fairly
limited, which to me doesn't seem to justify the engineering cost,
and it's not like we haven't been trying...

I am not suggesting we give up entirely, but perhaps we should look at the
operator cases (for those who cannot afford cold migrations, or more simply
stand up a new cloud to run side-by-side with old cloud, and leave the old
one running until it drains), individually. This means having someone
technical who has a deep insight into these operator's environments lead
the development effort required to adjust the open source components to
accommodate whatever migration process makes sense to them. Having someone
championing a general effort from the 'outside' does not sound like an
efficient use of anyone's time.

So this goes back to the question: who can effectively lead the technical
effort? I personally don't think we can have Neutron cores or Nova cores
lead this effort and be effective, if they don't have direct
access/knowledge to these cloud platforms, and everything that pertains to
them.

Armando


 -Sean

 --
 Sean Dague
 http://dague.net



Re: [openstack-dev] [Neutron] Neutron extensions

2015-03-19 Thread Armando M.
Forwarding my reply to the other thread here:



If my memory does not fail me, changes to the API (new resources, new
resource attributes or new operations allowed to resources) have always
been done according to these criteria:

   - an opt-in approach: this means we know the expected behavior of the
   plugin as someone has coded the plugin in such a way that the API change is
   supported;
   - an opt-out approach: if the API change does not require explicit
   backend support, and hence can be deemed supported by all plugins.
   - a 'core' extension (ones available in neutron/extensions) should be
   implemented at least by the reference implementation;

Now, there might have been examples in the past where criteria were not
met, but these should be seen as exceptions rather than the rule, and as
such, fixed as defects so that an attribute/resource/operation that is
accidentally exposed to a plugin will either be honored as expected or an
appropriate failure will be propagated to the user. Bottom line, the server must
avoid failing silently, because failing silently is bad for the user.

Now both features [1] and [2] violated the opt-in criterion above: they
introduced resources attributes in the core models, forcing an undetermined
behavior on plugins.

I think that keeping [3,4] as is can lead to a poor user experience; IMO
it's unacceptable to let a user specify the attribute, and see that
ultimately the plugin does not support it. I'd be fine if this was an
accident, but doing this by design is a bit evil. So, I'd suggest the
following, in order to keep the features in Kilo:

   - Patches [3, 4] did introduce config flags to control the plugin
   behavior, but it looks like they were not applied correctly; for instance,
   the vlan_transparent case was only applied to ML2. Similarly the MTU config
   flag was not processed server side to ensure that plugins that do not
   support advertisement do not fail silently. This needs to be rectified.
   - As for VLAN transparency, we'd need to implement work item 5 (of 6) of
   spec [2], as this extension without at least a backend able to let tagged
   traffic pass doesn't seem right.
   - Ensure we sort out the API tests so that we know how the features
   behave.

Now granted that controlling the API via config flags is not the best
solution, as this was always handled through the extension mechanism, but
since we've been talking about moving away from extension attributes with
[5], it does sound like a reasonable stop-gap solution.
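To make the opt-in criterion concrete, here is a minimal sketch of the server-side guard described above; the capability flag and exception names are assumptions for illustration, not actual Neutron code:

```python
# Minimal sketch of the opt-in guard: the server checks an explicit
# capability flag before accepting an extension attribute, and fails
# loudly instead of silently ignoring it.  Flag and exception names
# are illustrative assumptions, not Neutron's actual API.

class AttributeNotSupported(Exception):
    pass

def validate_vlan_transparency(plugin, network):
    """Reject vlan_transparent=True unless the plugin has opted in."""
    requested = network.get("vlan_transparent", False)
    supported = getattr(plugin, "supports_vlan_transparency", False)
    if requested and not supported:
        raise AttributeNotSupported(
            "vlan_transparent requested but %s does not support it"
            % type(plugin).__name__)

class LegacyPlugin:
    pass  # never opted in: requests must fail loudly

class TransparentPlugin:
    supports_vlan_transparency = True  # explicit opt-in
```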

Thoughts?
Armando

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
[2]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
[3]
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
[4]
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
[5] https://review.openstack.org/#/c/136760/

On 19 March 2015 at 14:56, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 19 March 2015 at 11:44, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 Just the fact that we did this does not make it right. But I guess that we
 are starting to bend the rules. I think that we really need to be far more
 diligent about this kind of stuff. Having said that we decided the
 following on IRC:
 1. Mtu will be left in the core (all plugins should be aware of this and
 treat it if necessary)
 2. Vlan-transparency will be moved to an extension. Pritesh is working on
 this.


 The spec started out as an extension, and in its public review people
 requested that it not be an extension and that it should instead be core.
 I accept that we can change our minds, but I believe there should be a good
 reason for doing so.  You haven't given that reason here and you haven't
 even said who the 'we' is that decided this.  Also, as the spec author, I
 had a conversation with you all but there was no decision at the end of it
 (I presume that came afterward) and I feel that I have a reasonable right
 to be involved.  Could you at least summarise your reasoning here?

 I admit that I prefer this to be in core, but I'm not terribly choosy and
 that's not why I'm asking.  I'm more concerned that this is changing our
 mind at literally the last moment, and in turn wasting a developer's time,
 when there was a perfectly good process to debate this before coding was
 begun, and again when the code was up for review, both of which apparently
 failed.  I'd like to understand how we avoid getting here again in the
 future.  I'd also like to be certain we are not simply reversing a choice
 on a whim.
 --
 Ian.




Re: [openstack-dev] [Neutron] VLAN transparency support

2015-03-19 Thread Armando M.
If my memory does not fail me, changes to the API (new resources, new
resource attributes or new operations allowed to resources) have always
been done according to these criteria:

   - an opt-in approach: this means we know the expected behavior of the
   plugin as someone has coded the plugin in such a way that the API change is
   supported;
   - an opt-out approach: if the API change does not require explicit
   backend support, and hence can be deemed supported by all plugins.
   - a 'core' extension (ones available in neutron/extensions) should be
   implemented at least by the reference implementation;

Now, there might have been examples in the past where criteria were not
met, but these should be seen as exceptions rather than the rule, and as
such, fixed as defects so that an attribute/resource/operation that is
accidentally exposed to a plugin will either be honored as expected or an
appropriate failure will be propagated to the user. Bottom line, the server must
avoid failing silently, because failing silently is bad for the user.

Now both features [1] and [2] violated the opt-in criterion above: they
introduced resources attributes in the core models, forcing an undetermined
behavior on plugins.

I think that keeping [3,4] as is can lead to a poor user experience; IMO
it's unacceptable to let a user specify the attribute, and see that
ultimately the plugin does not support it. I'd be fine if this was an
accident, but doing this by design is a bit evil. So, I'd suggest the
following, in order to keep the features in Kilo:

   - Patches [3, 4] did introduce config flags to control the plugin
   behavior, but it looks like they were not applied correctly; for instance,
   the vlan_transparent case was only applied to ML2. Similarly the MTU config
   flag was not processed server side to ensure that plugins that do not
   support advertisement do not fail silently. This needs to be rectified.
   - As for VLAN transparency, we'd need to implement work item 5 (of 6) of
   spec [2], as this extension without at least a backend able to let tagged
   traffic pass doesn't seem right.
   - Ensure we sort out the API tests so that we know how the features
   behave.

Now granted that controlling the API via config flags is not the best
solution, as this was always handled through the extension mechanism, but
since we've been talking about moving away from extension attributes with
[5], it does sound like a reasonable stop-gap solution.

Thoughts?
Armando

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
[2]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
[3]
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
[4]
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
[5] https://review.openstack.org/#/c/136760/

On 19 March 2015 at 12:01, Gary Kotton gkot...@vmware.com wrote:

  With regards to the MTU, can you please point me to where we validate
 that the MTU defined by the tenant is actually <= the supported MTU on the
 network? I did not see this in the code (maybe I missed something).
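A sketch of the kind of server-side check being asked about here — the function name, the encapsulation-overhead parameter, and the exception are all illustrative assumptions, not Neutron's real code:

```python
# Hypothetical sketch of a server-side MTU validation: reject a tenant MTU
# that exceeds what the underlying network can actually carry.

class MTUNotSupported(ValueError):
    pass

def validate_network_mtu(requested_mtu, physical_mtu, encap_overhead=0):
    """Return the requested MTU if it fits, otherwise fail loudly."""
    max_mtu = physical_mtu - encap_overhead
    if requested_mtu > max_mtu:
        raise MTUNotSupported(
            "requested MTU %d exceeds maximum %d" % (requested_mtu, max_mtu))
    return requested_mtu

# e.g. VXLAN on a 1500-byte physical network leaves ~1450 bytes for the tenant
print(validate_network_mtu(1450, 1500, encap_overhead=50))
```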


   From: Ian Wells ijw.ubu...@cack.org.uk
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Thursday, March 19, 2015 at 8:44 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] VLAN transparency support

Per the other discussion on attributes, I believe the change walks in
 historical footsteps and it's a matter of project policy choice.  That
 aside, you raised a couple of other issues on IRC:

 - backward compatibility with plugins that haven't adapted their API -
 this is addressed in the spec, which should have been implemented in the
 patches (otherwise I will downvote the patch myself) - behaviour should be
 as before with the additional feature that you can now tell more about what
 the plugin is thinking
  - whether they should be core or an extension - this is a more personal
 opinion, but on the grounds that all networks are either trunks or not, and
 all networks have MTUs, I think they do want to be core.  I would like to
 see plugin developers strongly encouraged to consider what they can do on
 both elements, whereas an extension tends to sideline functionality from
 view so that plugin writers don't even know it's there for consideration.

  Aside from that, I'd like to emphasise the value of these patches, so
 hopefully we can find a way to get them in in some form in this cycle.  I
 admit I'm interested in them because they make it easier to do NFV.  But
 they also help normal cloud users and operators, who otherwise have to do
 some really strange things [1].  I think it's maybe a little unfair to post
 reversion patches before discussion, particularly when the patch works,
 passes tests and implements an approved 

Re: [openstack-dev] [Neutron] Neutron extensions

2015-03-20 Thread Armando M.
In order to track this, and for Kyle's sanity, I have created these two RC1
bugs:

- https://bugs.launchpad.net/neutron/+bug/1434667
- https://bugs.launchpad.net/neutron/+bug/1434671

Please, let's make sure that whatever approach we decide on, the resulting
code fix targets those two bugs.

Thanks,
Armando

On 20 March 2015 at 09:51, Armando M. arma...@gmail.com wrote:



 On 19 March 2015 at 23:59, Akihiro Motoki amot...@gmail.com wrote:

 Forwarding my reply to the other thread too...

 Multiple threads on the same topic are confusing.
 Can we use this thread if we continue the discussion?
 (The title of this thread looks appropriate.)

 
 An API extension is the only way users can know which features are
 available until we support API microversioning (v2.1 or something).
 I believe VLAN transparency support should be implemented as an
 extension, not by changing the core resource attributes directly.
 Otherwise users (including Horizon) cannot know whether the field is
 available or not.

 Even though VLAN transparency and MTU support are basic features, they
 are better implemented as an extension.
 Configuration does not help from an API perspective as it is not visible
 through the API.


 I was only suggesting the configuration-based approach because it was
 simpler and it didn't lead to the evil mixin business. Granted it does not
 help from the API perspective, but we can hardly claim good discoverability
 of the API capabilities anyway :)

 That said, I'd be ok with moving one or both of these attributes to the
 extension framework. I thought that consensus on having them as core
 resources had been reached at the time of the spec proposal.



 We are discussing moving away from extension attributes as Armando
 commented,
 but I think that discussion is about resources/attributes which are
 already well used and required.
 It looks natural to me that new resources/attributes are implemented
 via an extension.
 The situation may be changed once we have support of API microversioning.
 (It is being discussed in the context of Nova API microversioning in
 the dev list thread started by Jay Pipes.)

 In my understanding, the case of the two IPv6 modes is an exception.
 From the initial design we wanted to have full support of IPv6 in
 the subnet resource,
 but through the discussion of IPv6 support it turned out some more
 modes were required,
 and we decided to change the subnet core resource. It is the exception.

 Thanks,
 Akihiro

 2015-03-20 8:23 GMT+09:00 Armando M. arma...@gmail.com:
  Forwarding my reply to the other thread here:
 
  
 
  If my memory does not fail me, changes to the API (new resources, new
  resource attributes or new operations allowed to resources) have always
 been
  done according to these criteria:
 
  an opt-in approach: this means we know the expected behavior of the
 plugin
  as someone has coded the plugin in such a way that the API change is
  supported;
  an opt-out approach: if the API change does not require explicit backend
  support, and hence can be deemed supported by all plugins.
  a 'core' extension (ones available in neutron/extensions) should be
  implemented at least by the reference implementation;
 
  Now, there might have been examples in the past where criteria were not
 met,
  but these should be seen as exceptions rather than the rule, and as
 such,
  fixed as defects so that an attribute/resource/operation that is
  accidentally exposed to a plugin will either be honored as expected or an
  appropriate failure will be propagated to the user. Bottom line, the server
  must avoid failing silently, because failing silently is bad for the user.
 
  Now both features [1] and [2] violated the opt-in criterion above: they
  introduced resources attributes in the core models, forcing an
 undetermined
  behavior on plugins.
 
  I think that keeping [3,4] as is can lead to a poor user experience; IMO
  it's unacceptable to let a user specify the attribute, and see that
  ultimately the plugin does not support it. I'd be fine if this was an
  accident, but doing this by design is a bit evil. So, I'd suggest the
  following, in order to keep the features in Kilo:
 
  Patches [3, 4] did introduce config flags to control the plugin
 behavior,
  but it looks like they were not applied correctly; for instance, the
  vlan_transparent case was only applied to ML2. Similarly the MTU config
 flag
  was not processed server side to ensure that plugins that do not support
  advertisement do not fail silently. This needs to be rectified.
  As for VLAN transparency, we'd need to implement work item 5 (of 6) of
 spec
  [2], as this extension without at least a backend able to let tagged
 traffic
  pass doesn't seem right.
  Ensure we sort out the API tests so that we know how the features
 behave.
 
  Now granted that controlling the API via config flags is not the best
  solution, as this was always handled through the extension mechanism,
 but
  since we've been talking about moving away

Re: [openstack-dev] [Neutron] Neutron extensions

2015-03-20 Thread Armando M.
On 19 March 2015 at 23:59, Akihiro Motoki amot...@gmail.com wrote:

 Forwarding my reply to the other thread too...

 Multiple threads on the same topic are confusing.
 Can we use this thread if we continue the discussion?
 (The title of this thread looks appropriate.)

 
 An API extension is the only way users can know which features are
 available until we support API microversioning (v2.1 or something).
 I believe VLAN transparency support should be implemented as an
 extension, not by changing the core resource attributes directly.
 Otherwise users (including Horizon) cannot know whether the field is
 available or not.

 Even though VLAN transparency and MTU support are basic features, they
 are better implemented as an extension.
 Configuration does not help from an API perspective as it is not visible
 through the API.


I was only suggesting the configuration-based approach because it was
simpler and it didn't lead to the evil mixin business. Granted it does not
help from the API perspective, but we can hardly claim good discoverability
of the API capabilities anyway :)

That said, I'd be ok with moving one or both of these attributes to the
extension framework. I thought that consensus on having them as core
resources had been reached at the time of the spec proposal.



 We are discussing moving away from extension attributes as Armando
 commented,
 but I think that discussion is about resources/attributes which are
 already well used and required.
 It looks natural to me that new resources/attributes are implemented
 via an extension.
 The situation may be changed once we have support of API microversioning.
 (It is being discussed in the context of Nova API microversioning in
 the dev list thread started by Jay Pipes.)

 In my understanding, the case of the two IPv6 modes is an exception.
 From the initial design we wanted to have full support of IPv6 in
 the subnet resource,
 but through the discussion of IPv6 support it turned out some more
 modes were required,
 and we decided to change the subnet core resource. It is the exception.

 Thanks,
 Akihiro

 2015-03-20 8:23 GMT+09:00 Armando M. arma...@gmail.com:
  Forwarding my reply to the other thread here:
 
  
 
  If my memory does not fail me, changes to the API (new resources, new
  resource attributes or new operations allowed to resources) have always
 been
  done according to these criteria:
 
  an opt-in approach: this means we know the expected behavior of the
 plugin
  as someone has coded the plugin in such a way that the API change is
  supported;
  an opt-out approach: if the API change does not require explicit backend
  support, and hence can be deemed supported by all plugins.
  a 'core' extension (ones available in neutron/extensions) should be
  implemented at least by the reference implementation;
 
  Now, there might have been examples in the past where criteria were not
 met,
  but these should be seen as exceptions rather than the rule, and as such,
  fixed as defects so that an attribute/resource/operation that is
  accidentally exposed to a plugin will either be honored as expected or an
  appropriate failure will be propagated to the user. Bottom line, the server
  must avoid failing silently, because failing silently is bad for the user.
 
  Now both features [1] and [2] violated the opt-in criterion above: they
  introduced resources attributes in the core models, forcing an
 undetermined
  behavior on plugins.
 
  I think that keeping [3,4] as is can lead to a poor user experience; IMO
  it's unacceptable to let a user specify the attribute, and see that
  ultimately the plugin does not support it. I'd be fine if this was an
  accident, but doing this by design is a bit evil. So, I'd suggest the
  following, in order to keep the features in Kilo:
 
  Patches [3, 4] did introduce config flags to control the plugin behavior,
  but it looks like they were not applied correctly; for instance, the
  vlan_transparent case was only applied to ML2. Similarly the MTU config
 flag
  was not processed server side to ensure that plugins that do not support
  advertisement do not fail silently. This needs to be rectified.
  As for VLAN transparency, we'd need to implement work item 5 (of 6) of
 spec
  [2], as this extension without at least a backend able to let tagged
 traffic
  pass doesn't seem right.
  Ensure we sort out the API tests so that we know how the features behave.
 
  Now granted that controlling the API via config flags is not the best
  solution, as this was always handled through the extension mechanism, but
  since we've been talking about moving away from extension attributes with
  [5], it does sound like a reasonable stop-gap solution.
 
  Thoughts?
  Armando
 
  [1]
 
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
  [2]
 
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
  [3]
 
 https://review.openstack.org/#/q/project:openstack/neutron

Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Armando M.
+1!

On 4 March 2015 at 22:29, Kevin Benton blak...@gmail.com wrote:

 +1
 On Mar 4, 2015 12:25 PM, Maru Newby ma...@redhat.com wrote:

 +1 from me, Ihar has been doing great work and it will be great to have
 him finally able to merge!

  On Mar 4, 2015, at 11:42 AM, Kyle Mestery mest...@mestery.com wrote:
 
  I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron as
 evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, and he's been
 doing a great job keeping Neutron current there. He's already a critical
 reviewer for all the Neutron repositories. In addition, he's a stable
 maintainer. Ihar makes himself available in IRC, and has done a great job
 working with the entire Neutron team. His reviews are thoughtful and he
 really takes time to work with code submitters to ensure his feedback is
 addressed.
 
  I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 and reinforce that +1/-1 reviews are super useful, and I encourage everyone
 to continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.
 
  Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar
 to the core reviewer team.
 
  Thanks!
  Kyle
 
  [1] http://stackalytics.com/report/contribution/neutron-group/90
 


Re: [openstack-dev] [Neutron] VLAN trunking network for NFV

2015-03-24 Thread Armando M.
From spec [1], I read:


   - Of the core drivers, the VLAN and OVS drivers will be marked as not
   supporting VLAN transparent networks and the LB, VXLAN and GRE drivers will
   be marked as supporting VLAN transparent networks. Other drivers will have
   legacy behaviour.
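As a rough illustration of the per-driver marking the spec quote above describes — class and attribute names here are hypothetical, not the actual Kilo implementation:

```python
# Rough illustration: each driver advertises whether it can carry
# VLAN-tagged tenant traffic, and a network can only be vlan_transparent
# if every driver in play agrees.  Names are hypothetical assumptions.

class TypeDriver:
    supports_vlan_transparency = False  # legacy default

class VlanDriver(TypeDriver):
    supports_vlan_transparency = False  # marked as not supporting

class OVSDriver(TypeDriver):
    supports_vlan_transparency = False  # drops tagged frames today

class VxlanDriver(TypeDriver):
    supports_vlan_transparency = True   # tunnel payload carries inner tags opaquely

class GreDriver(TypeDriver):
    supports_vlan_transparency = True

def network_can_be_transparent(drivers):
    """True only if every loaded driver supports VLAN transparency."""
    return all(d.supports_vlan_transparency for d in drivers)

print(network_can_be_transparent([VxlanDriver(), GreDriver()]))  # True
print(network_can_be_transparent([OVSDriver(), VxlanDriver()]))  # False
```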

I can't seem to find in the code where this is implemented though. Can you
elaborate?

This may be beside the point, but I really clash with the idea that we
provide a reference implementation of something we don't have CI for... For
that reason, I am starting to become really wary of the shape this has been
merged in. Let's hope we tie up the appropriate loose ends in the next couple
of days; otherwise we're left with no option other than pulling this out of
Kilo.

A

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html





On 24 March 2015 at 11:17, Ian Wells ijw.ubu...@cack.org.uk wrote:

 That spec ensures that you can tell what the plugin is doing.  You can ask
 for a VLAN transparent network, but the cloud may tell you it can't make
 one.

 The OVS driver in OpenStack drops VLAN-tagged packets, I'm afraid, and the
 spec you're referring to doesn't change that.  The spec does ensure that if
 you try and create a VLAN trunk on a cloud that uses the OVS driver, you'll
 be told you can't.  In the future, the OVS driver can be fixed, but that's
 how things stand at present.  Fixing the OVS driver really involves getting
 in at the OVS flow level - can be done, but we started with the basics.

 If you want to use a VLAN trunk using the current code, I recommend VXLAN
 or GRE along with the Linuxbridge driver, both of which support VLAN
 transparent networking.  If they're configured and you ask for a VLAN trunk
 you'll be told you got one.
 --
 Ian.


 On 24 March 2015 at 09:43, Daniele Casini daniele.cas...@dektech.com.au
 wrote:

 Hi all:

 in reference to the following specification about the creation of VLAN
 trunking network for NFV

 https://review.openstack.org/#/c/136554/3/specs/kilo/nfv-vlan-trunks.rst

 I would like to better understand how the tagged traffic will be
 handled. In order to explain myself, I report the following use case:

 A VNF is deployed in one VM, which has a trunk port carrying traffic for
 two VLANs over a single link, able to transport more than one VLAN through a
 single integration-bridge (br-int) port. So, how does br-int manage the
 VLAN IDs? In other words, what are the actions performed by br-int when a
 VM forwards traffic to another host?
 Does it add a further tag, or replace the existing one, keeping the
 match in a table or something like that?

 Thank you very much.

 Daniele
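For illustration, the OVS agent's usual handling can be modelled roughly as follows. This is a toy sketch, not the agent's code (the real behaviour is implemented with OpenFlow rules on br-int/br-tun, and the function names here are invented for the example); it also shows why guest-tagged frames are dropped, which is the limitation Ian describes:

```python
# Toy model of the OVS agent's "local VLAN" scheme. Each Neutron network
# gets a bridge-local VLAN on br-int; the VM port behaves like an OVS
# *access* port tagged with that VLAN. Untagged guest frames are tagged on
# ingress, and the local tag is swapped for the network's segmentation ID
# (tunnel ID or provider VLAN) on the way out. A frame that already carries
# its own 802.1q tag does not match the access-port behaviour and is
# dropped -- hence the OVS driver cannot offer VLAN transparency today.

def br_int_ingress(frame, local_vlan):
    """Return the frame as it leaves br-int, or None if it is dropped."""
    if frame.get("vlan") is not None:
        return None  # guest-tagged frame on an access port: dropped
    out = dict(frame)
    out["vlan"] = local_vlan  # tag with the network's bridge-local VLAN
    return out


def br_tun_egress(frame, local_vlan_to_segment):
    """Swap the bridge-local VLAN for the network's segmentation ID."""
    segment = local_vlan_to_segment[frame["vlan"]]
    out = dict(frame)
    del out["vlan"]
    out["tunnel_id"] = segment  # e.g. VXLAN VNI or GRE key
    return out


mapping = {1: 10042}  # bridge-local VLAN 1 -> tunnel ID 10042

plain = br_int_ingress({"payload": "http"}, local_vlan=1)
print(br_tun_egress(plain, mapping))

# A VNF's own tagged traffic never makes it past br-int:
print(br_int_ingress({"payload": "voice", "vlan": 100}, local_vlan=1))  # None
```

So in this model the local tag is replaced, not stacked, and the mapping is kept per-network by the agent; a second, guest-supplied tag has nowhere to go, which is what the VLAN-transparency flag is meant to surface to the user.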


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-23 Thread Armando M.
On 23 April 2015 at 07:32, Russell Bryant rbry...@redhat.com wrote:

 On 04/22/2015 10:33 PM, Armando M. wrote:
 
  Would it make sense to capture these projects as simply
  'affiliated', ie. with a loose relationship to Neutron, because
  they use/integrate with Neutron in some form or another (e.g.
  having 3rd-party, extending-api, integrating-via-plugin-model,
  etc)? Then we could simply consider extending the projects.yaml
  to capture this new concept (for Neutron or any other project)
  once we defined its ontology.
 
  Thoughts?
 
 
  That seems interesting, but given the community's stated goals
  around Big Tent, it seems to me like affiliation or not, adding
  these under the Neutron tent, inside the larger OpenStack Bigger
  Tent, would be a good thing.
 
  Thanks,
  Kyle
 
 
 
  Thanks for clearing some of the questions I raised. I should stress the
  fact that I welcome the idea of finding a more sensible home for these
  projects in light of the big tent developments, but it seems like we're
  still pouring the foundations. I'd rather get us to a point where
  the landscape is clear and the dust settled. That would help us make a
  more informed decision compared to the one we can make right now.

 Can you be a bit more specific about what's not clear and would help
 make you feel more informed?


I am not clear on how we make a decision as to which project belongs to the
Neutron 'umbrella', 'tent', 'stadium' (or however we end up calling it), and
which doesn't :)


 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-23 Thread Armando M.
On 23 April 2015 at 01:49, Thierry Carrez thie...@openstack.org wrote:

 Armando M. wrote:
  Is it sensible to assume that Stackforge is going away entirely at some
  point in the future, and we'll have a single namespace - OpenStack?

 The key difference between Stackforge and OpenStack is governance. Any
 project can be in Stackforge. Projects that are considered OpenStack
 projects are special in two ways:

 1- They need to fit the OpenStack requirements as defined by the TC
 2- They need to place themselves under the oversight of the TC

 In return, they get voting rights to elect the TC itself.

 While most projects in Stackforge actually fit (1) and accept (2), not
 all of them do. It's also not a decision that can be made for them (due
 to (2)), so we can't just migrate them.

  It's my understanding that StackForge projects are bound to the same
  governance model, or am I mistaken?

 Of course they aren't. They don't sign up for anything, and our
 governance model has no sort of control over them.


I have always considered StackForge projects (the vast majority anyway) to be
projects with the ultimate desire to become an integral part of the
OpenStack ecosystem, and as such they would need to follow the same model as
the other OpenStack projects (at least before the latest governance changes).
This is what I meant by 'bound to the same governance model', ie. besides
the legalities, following the same rules as any other OpenStack project,
but I can see I may have generated confusion with my point.

Thierry, thanks for the clarification.


 --
 Thierry Carrez (ttx)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-23 Thread Armando M.
On 23 April 2015 at 09:58, Russell Bryant rbry...@redhat.com wrote:

 On 04/23/2015 12:14 PM, Armando M. wrote:
 
  On 23 April 2015 at 07:32, Russell Bryant rbry...@redhat.com wrote:
 
   On 04/22/2015 10:33 PM, Armando M. wrote:
   
    Would it make sense to capture these projects as simply
    'affiliated', ie. with a loose relationship to Neutron, because
    they use/integrate with Neutron in some form or another (e.g.
    having 3rd-party, extending-api, integrating-via-plugin-model,
    etc)? Then we could simply consider extending the projects.yaml
    to capture this new concept (for Neutron or any other project)
    once we defined its ontology.
   
    Thoughts?
   
    That seems interesting, but given the community's stated goals
    around Big Tent, it seems to me like affiliation or not, adding
    these under the Neutron tent, inside the larger OpenStack Bigger
    Tent, would be a good thing.
   
    Thanks,
    Kyle
   
    Thanks for clearing some of the questions I raised. I should stress
    the fact that I welcome the idea of finding a more sensible home for
    these projects in light of the big tent developments, but it seems
    like we're still pouring the foundations. I'd rather get us to a
    point where the landscape is clear and the dust settled. That would
    help us make a more informed decision compared to the one we can
    make right now.
 
   Can you be a bit more specific about what's not clear and would help
   make you feel more informed?
 
  I am not clear on how we make a decision, as to which project belongs
  or doesn't to the Neutron 'umbrella', 'tent', 'stadium' or however we
  end up calling it :)

 OK, that's fine.  Figuring that out is the next step if folks agree with
 Neutron as the home for networking-foo repos.  I'm happy to write up a
 strawman proposal for inclusion criteria and a set of expectations
 around responsibilities and communication.


What about the other Neutron-related ones that didn't strictly follow the
networking- prefix in the name; would the naming convention be one of the
criteria? I look forward to your proposal.

Thanks,
Armando


 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-23 Thread Armando M.


 I agree with Henry here.
 Armando, if we use your analogy with Nova, which doesn't build and deliver
 KVM, we can say that Neutron doesn't build or deliver OVS. It builds a
 driver and an agent which manage OVS, just like Nova, which provides a
 driver to manage libvirt/KVM.
 Moreover, external SDN controllers are much more complex than Neutron with
 its reference drivers. I feel like forcing the cloud admin to deploy and
 maintain an external SDN controller would be a terrible experience for him
 if he just needs a simple way to manage connectivity between VMs.
 At the end of the day, it might be detrimental to the Neutron project.



I don't think that anyone is saying that cloud admins are going to be
forced to deploy and maintain an external SDN controller. There are plenty
of deployment examples where people are just happy with network
virtualization the way Neutron has been providing for years and we should
not regress on that. To me it's mostly a matter of responsibilities of who
develops what, and what that what is :)

The consumption model is totally a different matter.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][QoS] service-plugin or not discussion

2015-04-24 Thread Armando M.
On 24 April 2015 at 01:47, Miguel Angel Ajo Pelayo mangel...@redhat.com
wrote:

 Hi Armando  Salvatore,

 On 23/4/2015, at 9:30, Salvatore Orlando sorla...@nicira.com wrote:



 On 23 April 2015 at 01:30, Armando M. arma...@gmail.com wrote:


 On 22 April 2015 at 06:02, Miguel Angel Ajo Pelayo mangel...@redhat.com
  wrote:


 Hi everybody,

    In the latest QoS meeting, one of the topics was a discussion about how
 to implement QoS [1], either in core or as a service plugin, in-tree or
 out-of-tree.


 It is really promising that after only two meetings the team is already
 split! I cannot wait for the API discussion to start ;)


 We seem to be relatively on the same page about how to model the API, but
 we have yet to loop in users/operators who have an interest in QoS to make
 sure they find it usable. [1]




 My apologies if I was unable to join, the meeting clashed with another
 one I was supposed to attend.


 My bad, sorry ;-/




    It’s my feeling, and Mathieu’s, that it looks more like a core feature,
 as we’re talking of port properties that we define at a high level, and
 most (QoS-capable) plugins may want to implement at the dataplane/control
 plane level, and also that it’s something requiring a good amount of review.


 Core is a term which has recently been abused in Neutron... However, I
 think you mean that it is a feature fairly entangled with the L2 mechanisms,


 Not only the L2 mechanisms, but the description of ports themselves, in
 the basic cases we’re just defining
 how “small” or “big” your port is.  In the future we could be saying “UDP
 ports 5000-6000” have the highest
 priority on this port, or a minimum bandwidth of 50Mbps…, it’s marked with
 an IPv6 flow label for hi-prio…
 or whatever policy we support.

 that deserves being integrated in what is today the core plugin and in
 the OVS/LB agents. To this aim I think it's good to make a distinction
 between the management plane and the control plane implementation.

 At the management plane you have a few choices:
 - yet another mixin, so that any plugin can add it and quickly support the
 API extension at the mgmt layer. I believe we're fairly certain everybody
 understands mixins are not sustainable anymore and I'm hopeful you are not
 considering this route.


 Are you specifically referring to this on every plugin?

 class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, ---
 dvr_mac_db.DVRDbMixin, ---
 external_net_db.External_net_db_mixin, ---
 sg_db_rpc.SecurityGroupServerRpcMixin,   ---
 agentschedulers_db.DhcpAgentSchedulerDbMixin,  ---
 addr_pair_db.AllowedAddressPairsMixin,  

 I’m quite allergic to mixins, I must admit, but, if it’s not the desired
 way, why don’t we refactor the way we compose plugins?! (yet more
 refactoring would probably slow us down…) But I feel like we’re pushing to
 overcomplicate the design for a case which is similar to everything else we
 had before (security groups, port security, allowed address pairs).

 It feels wrong to have every similar feature done in a different way, even
 if the current way is not the best one I admit.


This attitude led us to the pain we are in now, I think we can no longer
afford to keep doing that. Bold goals require bold actions. If we don't
step back and figure out a way to extend the existing components without
hijacking the current codebase, it would be very difficult to give this
effort the priority it deserves.

 - a service plugin - as suggested by some proposers. The service plugin is
 fairly easy to implement, and now Armando has provided you with a mechanism
 to register for callbacks for events in other plugins. This should make the
 implementation fairly straightforward. This also enables other plugins to
 implement QoS support.
 - a ML2 mechanism driver + a ML2 extension driver. From an architectural
 perspective this would be the preferred solution for a ML2 implementation,
 but at the same time will not provide management level support for non-ML2
 plugins.


 I’m a bit lost as to why a plugin (apart from ML2) could not just declare
 that it’s implementing the extension; or is it just that the only way we
 have to do it right now is mixins? Why would ML2 avoid it?






In the other hand Irena and Sean were more concerned about having a
 good separation
 of concerns (I agree actually with that part), and being able to do
 quicker iterations on a
 separate stackforge repo.


 Perhaps we're trying to address the issue at the wrong time. Once a
 reasonable agreement has been reached on the data model, and the API,
 whether we're going with a service plugin or core etc should be an
 implementation detail. I think the crux of the matter is the data plane
 integration. From a management and control standpoint it should be fairly
 trivial to expose/implement the API and business logic via a service plugin
 and, and some of you suggested, integrate with the core

Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-24 Thread Armando M.

 If we've reached the point where we're arguing about naming, does this mean
 we've built consensus on the yes, it makes sense for these to live under
 Neutron argument?



I think we are in agreement that these projects need to find a more obvious
home, they feel somewhat orphan otherwise. Most of them are extensions or
plugins to Neutron and since they cannot stand alone, it makes sense to
have them associated with it. As far as I am concerned I think the matter
is about the inclusion methodology as well as the timing.

Cheers,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   4   5   6   7   >