Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
On 8 August 2014 10:56, Kevin Benton  wrote:

> There is an enforcement component to the group policy that allows you to
> use the current APIs and it's the reason that group policy is integrated
> into the neutron project. If someone uses the current APIs, the group
> policy plugin will make sure they don't violate any policy constraints
> before passing the request into the regular core/service plugins.
>

This is the statement that makes me trip over, and I don't understand why
GBP and Neutron Core need to be 'integrated' together as they have. Policy
decision points can be decentralized from the system under scrutiny; we
don't need to have one giant monolithic system that does everything. It's
an architectural decision that would make it difficult to achieve
composability and all the other good -ilities of software systems.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
>
> Adding the GBP extension to Neutron does not change the nature of the
> software architecture of Neutron making it more or less monolithic.


I agree with this statement...partially: the way GBP was developed is in
accordance with the same principles and architectural choices made for the
service plugins and frameworks we have right now, and yes, it does not make
Neutron more monolithic, but certainly not less either. These very same
principles have unveiled limitations that we now realize need to be
addressed, as Neutron's busy agenda allows. That said, if I were given the
opportunity to revise some architectural decisions during the new
groundbreaking work (regardless of its nature), I would.

For instance, I hate that the service plugins live in the same address
space as the Neutron server; I hate that I have one Neutron server that does
L2, L3, IPAM, ...; we could break it down and make sure every entity can
have its own lifecycle: we could compose and integrate more easily if we did.
Isn't that what years of middleware and distributed systems have taught us?

I suggested in the past that GBP would best integrate with Neutron via a
stable and RESTful interface, just like any other OpenStack project does. I
have yet to be convinced otherwise, and I would love to be able to
change my opinion.


> It
> fulfills a gap that is currently present in the Neutron API, namely -
> to complement the current imperative abstractions with an
> app-developer/deployer-friendly declarative abstraction [1]. To
> reiterate, it has been proposed as an “extension”, and not a
> replacement of the core abstractions or the way those are consumed.

> If
> this is understood and interpreted correctly, I doubt that there
> should be reason for concern.
>
>
I never said that GBP did (i.e. meant to replace the core abstractions): I am
talking purely about architecture and system integration. Not sure if this
statement is directed at my comment.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
On 8 August 2014 14:55, Kevin Benton  wrote:

> >This is the statement that makes me trip over,
>
> I don't know what that means. Does it mean that you are so incredibly
> shocked by the stupidity of that statement that you fall down? Or does it
> mean something else?
>

Why would you think that? I trip over the obstacle that prevents me from
understanding! If anything, I would blame my own stupidity, not that of the
statement :)


>
> >Policy decision points can be decentralized from the system under
> scrutiny,
>
> Unfortunately they can't in this case where some policy needs to be
> enforced between plugins. If we could refactor the communication between
> service and core plugins to use the API as well, then we probably could
> build this as a middleware.
>

Assuming I agreed they couldn't (which I find hard to believe), instead of
going after the better approach, we stick with the less optimal one?


>
> On Fri, Aug 8, 2014 at 1:45 PM, Armando M.  wrote:
>
>> On 8 August 2014 10:56, Kevin Benton  wrote:
>>
>>> There is an enforcement component to the group policy that allows you to
>>> use the current APIs and it's the reason that group policy is integrated
>>> into the neutron project. If someone uses the current APIs, the group
>>> policy plugin will make sure they don't violate any policy constraints
>>> before passing the request into the regular core/service plugins.
>>>
>>
>> This is the statement that makes me trip over, and I don't understand why
>> GBP and Neutron Core need to be 'integrated' together as they have. Policy
>> decision points can be decentralized from the system under scrutiny, we
>> don't need to have one giant monolithic system that does everything; it's
>> an architectural decision that would make difficult to achieve
>> composability and all the other good -ilities of software systems.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
>  One advantage of the service plugin is that one can leverage the neutron
> common framework such as Keystone authentication where common scoping is
> done. It would be important in the policy type of framework to have such
> scoping
>

The framework you're referring to is common and already reusable; it's not
a prerogative of Neutron.


>
> While the service plugin has scalability issues, as pointed out above, in
> that it resides in the neutron server, it is nonetheless stable and user
> configurable, and a lot of common code is executed for networking services.
>

This is what static or dynamic libraries are for; I can have a building
block and reuse it many times, the way I see fit, while keeping my
components' lifecycles separate.


> So while we make the next generation services framework more distributed
> and scalable, it is ok to do it under the current framework especially
> since it has provision for the user to opt in when needed.
>

A next generation services framework is not a prerequisite to integrating
two OpenStack projects via REST APIs. I don't see how we would associate
the two concepts together.


>
>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
>
>
> On Fri, Aug 8, 2014 at 5:38 PM, Armando M.  wrote:
>
>>
>>
>>>   One advantage of the service plugin is that one can leverage the
>>> neutron common framework such as Keystone authentication where common
>>> scoping is done. It would be important in the policy type of framework to
>>> have such scoping
>>>
>>
>> The framework you're referring to is common and already reusable, it's
>> not a prerogative of Neutron.
>>
>
>  Are you suggesting that Service Plugins, L3, IPAM, etc. become individual
> endpoints, resulting in redundant authentication round-trips for each of
> the components?
>
> Wouldn't this result in degraded performance and potential consistency
> issues?
>

The endpoint - in the OpenStack lingo - that exposes the API abstractions
(concepts and operations) can be, logically and physically, different from
the worker that implements these abstractions; authentication is orthogonal
to this and I am not suggesting what you mention.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?

2014-08-09 Thread Armando M.
On 9 August 2014 10:16, Jay Pipes  wrote:

> Paul, does this friend of a friend have a reproducible test script for
> this?
>
> Thanks!
> -jay
>
>
We would also need to know the OpenStack release where this issue manifests
itself. A number of bugs have been raised in the past around this type of
issue, and the last fix I recall is this one:

https://bugs.launchpad.net/nova/+bug/1300325

It's possible that this might have regressed, though.

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Armando M.
I am gonna add more color to this story by posting my replies on review [1]:

Hi Angus,

You touched on a number of points. Let me try to give you an answer to all
of them.

>> (I'll create a bug report too. I still haven't worked out which class of
changes need an accompanying bug report and which don't.)

The long story can be read below:

https://wiki.openstack.org/wiki/BugFilingRecommendations

https://wiki.openstack.org/wiki/GitCommitMessages

IMO, there's a grey area for some of the issues you found, but when I am
faced with a bug, I tend to ask myself: would a bug report be useful to
someone else? The author of the code? A consumer of the code? Not everyone
follows the code review system all the time, whereas Launchpad is pretty
much the tool everyone uses to stay abreast of the OpenStack release
cycle. Obviously if you're fixing a grammar nit, or filing a cosmetic
change that has no functional impact, then the lack of a test is warranted;
but in this case you're fixing a genuine error: let's say we want to backport
this to Icehouse, how else would we make the release manager aware of that?
He/she is looking at Launchpad.

>> I can add a unittest for this particular code path, but it would only
check this particular short segment of code, would need to be maintained as
the code changes, and wouldn't catch another occurrence somewhere else.
This seems an unsatisfying return on the additional work :(

You're looking at this from the wrong perspective. This is not about
ensuring that other code paths are valid, but that this code path stays
valid over time, ensuring that the code path is exercised and that no other
regression of any kind creeps in. The reason why this error was introduced
in the first place is that the code wasn't tested when it should have been.
If you don't do that, this mechanical effort of fixing errors found by
static analysis is kind of ineffective, which leads me to the last point.

>> I actually found this via static analysis using pylint - and my question
is: should I create some sort of pylint unittest that tries to catch this
class of problem across the entire codebase? [...]

I value what you're doing, however I would see even more value if we
prevented these types of errors from occurring in the first place via
automation. You run pylint today, but what about tomorrow, or a week from
now? Are you gonna be filing pylint fixes forever? We might be better off
automating the check and catching these types of errors before they land in
the tree. This means that the work you are doing is two-pronged: a)
automate the detection of some failures by hooking this into tox.ini via
HACKING/pep8 or an equivalent mechanism and b) file all the fixes required
for these validation tests to pass; c) everyone is happy, or at least they
should be.
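As a rough sketch of the kind of automation I have in mind (the check name,
number and pattern below are made up, and the real hook would live alongside
the existing hacking checks and be registered via the flake8
local-check-factory in tox.ini), something along these lines could run as
part of the pep8 job:

    # neutron/hacking/checks.py (hypothetical addition)
    import re

    # Example of a "dangerous" pattern worth automating: old-style except
    # clauses such as "except ValueError, e:", which break on Python 3.
    _old_except_re = re.compile(r"\bexcept\s+[^,\s()]+\s*,\s*\w+\s*:")


    def check_no_old_style_except(logical_line):
        """N9xx - hypothetical check for old-style except clauses."""
        if _old_except_re.search(logical_line):
            yield (0, "N9xx: use 'except Exception as e', not "
                      "'except Exception, e'")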

I'd welcome exploring a better strategy to ensure a better quality of the
code base; without some degree of automation, nothing will stop these
conversations from happening again.

Cheers,

Armando

[1] https://review.openstack.org/#/c/113777/


On 13 August 2014 03:02, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> On 13/08/14 09:28, Angus Lees wrote:
> > I'm doing various small cleanup changes as I explore the neutron
> > codebase. Some of these cleanups are to fix actual bugs discovered
> > in the code.  Almost all of them are tiny and "obviously correct".
> >
> > A recurring reviewer comment is that the change should have had an
> >  accompanying bug report and that they would rather that change was
> > not submitted without one (or at least, they've -1'ed my change).
> >
> > I often didn't discover these issues by encountering an actual
> > production issue so I'm unsure what to include in the bug report
> > other than basically a copy of the change description.  I also
> > haven't worked out the pattern yet of which changes should have a
> > bug and which don't need one.
> >
> > There's a section describing blueprints in NeutronDevelopment but
> > nothing on bugs.  It would be great if someone who understands the
> > nuances here could add some words on when to file bugs: Which type
> > of changes should have accompanying bug reports? What is the
> > purpose of that bug, and what should it contain?
> >
>
> It was discussed before at:
> http://lists.openstack.org/pipermail/openstack-dev/2014-May/035789.html
>
> /Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
>
> iQEcBAEBCgAGBQJT6zfOAAoJEC5aWaUY1u570wQIAMpoXIK/p5invp+GW0aMMUK0
> C/MR6WIJ83e6e2tOVUrxheK6bncVvidOI4EWGW1xzP1sg9q+8Hs1TNyKHXhJAb+I
> c435MMHWsDwj6p1OeDxHnSOVMthcGH96sgRa1+CIk6+oktDF3IMmiOPRkxdpqWCZ
> 7TkV75mryehrTNwAkVPfpWG3OhWO44d5lLnJFCIMCuOw2NHzyLIOoGQAlWNQpy4V
> a869s00WO37GEed6A5Zizc9K/05/6kpDIQVim37tw91JcZ69VelUlZ1THx+RTd33
> 92r87APm3fC/LioKN3fq1UUo2c94Vzl3gYPFVl8ZateQNMKB7ONMBePOfWR9H1k=
> =wCJQ
> -END PGP SIGNATURE-
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.ope

Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Armando M.
>
>
> > At the moment pylint on neutron is *very* noisy, and I've been looking
> through
> > the reported issues by hand to get a feel for what's involved.  Enabling
> > pylint is a separate discussion that I'd like to have - in some other
> thread.
> >
>
> I think enabling pylint check was discussed at the start of the
> project, but for the reasons you mention, it was not considered.


Yes, noise can be a problem, but we should be able to adjust it to a level
we're comfortable with, at least for catching the dangerous violations.
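For instance (the environment name and rcfile are purely illustrative, and
assuming pylint's errors-only mode fits our needs), a tox environment along
these lines would keep the signal reasonably high:

    [testenv:pylint]
    deps = pylint
    commands = pylint -E --rcfile=.pylintrc neutron

The -E/--errors-only switch restricts the output to error-class messages,
which is roughly the "dangerous violations" level mentioned above.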
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-21 Thread Armando M.
Hi folks,

According to [1], we have ways to introduce external references to commit
messages.

These are useful to mark certain patches and their relevance in the context
of documentation, upgrades, etc.

I was wondering if it would be useful to consider the addition of another
tag:

GateFailureFix

The objective of this tag, mainly for consumption by the review team, would
be to make sure that some patches get more attention than others, as they
affect the velocity at which certain critical issues are addressed (and gate
failures affect everyone).

As for machine consumption, I know that some projects use the
'gate-failure' tag to categorize LP bugs that affect the gate. The use of a
GateFailureFix tag in the commit message could make the tagging automatic,
so that we can keep a log of what all the gate failures are over time.
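As a purely illustrative example (the bug number and summary below are made
up), a commit message using the proposed tag could look like this:

    Fix race in DHCP agent port cache

    ...commit message body explaining the fix...

    GateFailureFix
    Closes-Bug: #1234567

The existing Closes-Bug footer would keep the Launchpad linkage we have
today; GateFailureFix would simply flag the patch to reviewers (and,
potentially, to tooling that tags the referenced bug as a gate failure).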

Not sure if this was proposed before, and I welcome any input on the matter.

Cheers,
Armando

[1] -
https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-21 Thread Armando M.
>
> A concern with this approach is it's pretty arbitrary, and not always
> clear which bugs are being addressed and how severe they are.
>

Well, isn't establishing whether LP reports are actual bugs and assigning
their severity exactly what triaging is for?


>
> An idea that came up in the Infra/QA meetup was to build a custom review
> dashboard based on the bug list in elastic recheck. That would also
> encourage people to categorize these bugs through that system, and I
> think provide a virtuous circle around identifying the issues at hand.
>

Having elastic recheck means that the bug has already been vetted, that a
fingerprint for the bug has been filed, etc. Granted, some gate failures may
be long-lasting, but I was hoping this mechanism would target those
failures that are fixed fairly quickly.


>
> I think Joe Gordon had a first pass at this, but I'd be more interested
> in doing it this way because it means the patch author fixing a bug just
> needs to know they are fixing the bug. Whether or not it's currently a
> gate issue would be decided not by the commit message (static) but by
> our system that understands what the gate issues are *right now* (dynamic).
>

Gate failures are not exactly low-hanging fruit, so it's likely that the
author of the patch already knows that he's fixing a severe issue. The tag
would be an alert for other reviewers so that they can give the patch more
attention. As a core reviewer, I welcome any proposal that wouldn't cause a
reviewer to switch across yet another dashboard, as we already have plenty
(but maybe that's just me).

Having said that, it sounds like you guys have already thought about this,
so it makes sense to discard this idea.

Thanks,
Armando


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-10 Thread Armando M.
Hi,

I devoured this thread, it was so interesting and full of
insights. It's not news that we've been pondering this in the
Neutron project for the past and current cycle or so.

Likely, this effort is going to take more than two cycles, and would
require a very focused team of people working closely together to
address this (most likely the core team members plus a few other folks
interested).

One question I was unable to get a clear answer to was: what happens to
existing/new bug fixes and features? Would the codebase go into lockdown
mode, i.e. not accepting anything else that isn't specifically
targeting this objective? Just using NFV as an example, I can't
imagine having changes supporting NFV still being reviewed and merged
while this process takes place...it would be like shooting at a moving
target! If we did go into lockdown mode, what happens to all the
corporate-backed agendas that aim at delivering new value to
OpenStack?

Should we relax what goes into the stable branches, i.e. consider having
a Juno on steroids six months from now that includes some of
the features/fixes that didn't land in time before this process kicks
off?

I like the end goal of having a leaner Nova (or Neutron for that
matter); it's the transition that scares me a bit!

Armando

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Armando M.
On 10 September 2014 22:23, Russell Bryant  wrote:
> On 09/10/2014 10:35 PM, Armando M. wrote:
>> Hi,
>>
>> I devoured this thread, so much it was interesting and full of
>> insights. It's not news that we've been pondering about this in the
>> Neutron project for the past and existing cycle or so.
>>
>> Likely, this effort is going to take more than two cycles, and would
>> require a very focused team of people working closely together to
>> address this (most likely the core team members plus a few other folks
>> interested).
>>
>> One question I was unable to get a clear answer was: what happens to
>> existing/new bug fixes and features? Would the codebase go in lockdown
>> mode, i.e. not accepting anything else that isn't specifically
>> targeting this objective? Just using NFV as an example, I can't
>> imagine having changes supporting NFV still being reviewed and merged
>> while this process takes place...it would be like shooting at a moving
>> target! If we did go into lockdown mode, what happens to all the
>> corporate-backed agendas that aim at delivering new value to
>> OpenStack?
>
> Yes, I imagine a temporary slow-down on new feature development makes
> sense.  However, I don't think it has to be across the board.  Things
> should be considered case by case, like usual.

Aren't we trying to move away from the 'usual'? Considering things on
a case by case basis still requires review cycles, etc. Keeping the
status quo would mean prolonging the exact pain we're trying to
address.

>
> For example, a feature that requires invasive changes to the virt driver
> interface might have a harder time during this transition, but a more
> straightforward feature isolated to the internals of a driver might be
> fine to let through.  Like anything else, we have to weigh cost/benefit.
>
>> Should we relax what goes into the stable branches, i.e. considering
>> having  a Juno on steroids six months from now that includes some of
>> the features/fixes that didn't land in time before this process kicks
>> off?
>
> No ... maybe I misunderstand the suggestion, but I definitely would not
> be in favor of a Juno branch with features that haven't landed in master.
>

I was thinking of the bold move of having Kilo (and beyond)
development solely focused on this transition. Until this is
complete, nothing would be merged that does not directly pertain to this
objective. At the same time, we'd still want pending features/fixes
(and possibly new features) to land somewhere stable-ish. I fear that
doing so in master, while stuff is churned up and moved out into
external repos, will make this whole task harder than it already is.

Thanks,
Armando

> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] DVR Tunnel Design Question

2014-09-17 Thread Armando M.
VLAN is on the radar; VXLAN/GRE was done to start with.

I believe Vivek mentioned the rationale in some other thread. The gist
of it is below:

In the current architecture, we use a unique DVR MAC per compute node
to forward DVR-routed traffic directly to the destination compute node.
The DVR-routed traffic from the source compute node will carry the
destination VM's underlay VLAN in the frame, but the source MAC in
that same frame will be the DVR unique MAC. So, the same DVR unique MAC
is used for potentially a number of overlay network VMs that exist
on that same source compute node.

The underlay infrastructure switches will see the same DVR unique MAC
being associated with different VLANs on incoming frames, and so this
would result in VLAN thrashing on the switches in the physical cloud
infrastructure. Since tunneling protocols carry the entire DVR-routed
inner frames as tunnel payloads, there is no thrashing effect on
underlay switches.

There will still be a thrashing effect on endpoints on the compute nodes
themselves, when they try to learn the association between the inner
frame's source MAC and the TEP port on which the tunneled frame is
received. But that we have addressed in the L2 agent by having a 'DVR
Learning Blocker' table, which ensures that learning for DVR-routed
packets alone is side-stepped.

As a result, VLAN was not promoted as a supported underlay for the
initial DVR architecture.

Cheers,
Armando

On 16 September 2014 20:35, 龚永生  wrote:
> I think VLAN should also be supported later.  The tunnel should not be
> a prerequisite for the DVR feature.
>
>
> -- Original --
> From:  "Steve Wormley";
> Date:  Wed, Sep 17, 2014 10:29 AM
> To:  "openstack-dev";
> Subject:  [openstack-dev] [neutron] DVR Tunnel Design Question
>
> In our environment using VXLAN/GRE would make it difficult to keep some of
> the features we currently offer our customers. So for a while now I've been
> looking at the DVR code, blueprints and Google drive docs and other than it
> being the way the code was written I can't find anything indicating why a
> Tunnel/Overlay network is required for DVR or what problem it was solving.
>
> Basically I'm just trying to see if I missed anything as I look into doing a
> VLAN/OVS implementation.
>
> Thanks,
> -Steve Wormley
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-11-28 Thread Armando M.
I have been doing so in a number of patches I pushed to reduce error
traces caused by the communication between the server and the DHCP agent.

I wanted to take care of the L3 agent too, but one thing I noticed is
that I couldn't find a log for it (I mean in the artifacts that are
published at the job's completion). Actually, I couldn't find an L3 agent
started by devstack either.

Am I missing something?

On 27 November 2013 09:08, Salvatore Orlando  wrote:
> Thanks Maru,
>
> This is something my team had on the backlog for a while.
> I will push some patches to contribute towards this effort in the next few
> days.
>
> Let me know if you're already thinking of targeting the completion of this
> job for a specific deadline.
>
> Salvatore
>
>
> On 27 November 2013 17:50, Maru Newby  wrote:
>>
>> Just a heads up, the console output for neutron gate jobs is about to get
>> a lot noisier.  Any log output that contains 'ERROR' is going to be dumped
>> into the console output so that we can identify and eliminate unnecessary
>> error logging.  Once we've cleaned things up, the presence of unexpected
>> (non-whitelisted) error output can be used to fail jobs, as per the
>> following Tempest blueprint:
>>
>> https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
>>
>> I've filed a related Neutron blueprint for eliminating the unnecessary
>> error logging:
>>
>>
>> https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
>>
>> I'm looking for volunteers to help with this effort, please reply in this
>> thread if you're willing to assist.
>>
>> Thanks,
>>
>>
>> Maru
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Testr] Brand new checkout of Neutron... getting insane unit test run results

2014-01-02 Thread Armando M.
To be fair, neutron cores turned down reviews [1][2][3] for fear that
the patch would break Hyper-V support for Neutron.

Whether it's been hinted (erroneously) that this was a packaging issue
is irrelevant for the sake of this discussion, and I suggested (when I
turned down review [3]) that we could make the requirement dependent on
the distro, so that the problem could be solved once and for all (and
without causing any side effects).

Just adding the pyudev dependency to requirements.txt is not
acceptable for the above-mentioned reason. I am sorry if people keep
abandoning the issue without taking the bull by the horns.
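(As a side note, a platform-dependent requirement could in principle be
expressed with an environment marker, assuming the pip/pbr versions in use
support them; the following is just a sketch of what such an entry could
look like:)

    # requirements.txt
    pyudev;sys_platform=='linux2'  # only needed by the Linux bridge agent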

[1] https://review.openstack.org/#/c/64333/
[2] https://review.openstack.org/#/c/55966/
[3] https://review.openstack.org/#/c/58884/

Cheers,
Armando

On 2 January 2014 18:03, Jay Pipes  wrote:
> On 01/01/2014 10:56 PM, Clark Boylan wrote:
>>
>> On Wed, Jan 1, 2014 at 7:33 PM, 黎林果  wrote:
>>>
>>> I have met this problem too. The unit tests can't be run.
>>> The output ends with:
>>>
>>> Ran 0 tests in 0.673s
>>>
>>> OK
>>> cp: cannot stat `.testrepository/-1': No such file or directory
>>>
>>> 2013/12/28 Jay Pipes :

 On 12/27/2013 11:11 PM, Robert Collins wrote:
>
>
> I'm really sorry about the horrid UI - we're in the middle of fixing
> the plumbing to report this and support things like tempest better -
> from the bottom up. The subunit listing -> testr reporting of listing
> errors is fixed on the subunit side, but not on the testr side
> yet.
>
> If you look at the end of the error:
>
> \rimport
>
> errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent\x85\xc5\x1a\\',
> stderr=None
> error: testr failed (3)
>
> You can see this^
>
> which translates as
> import errors
> neutron.tests.unit.linuxbridge.test_lb_neutron_agent
>
> so
>
> neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py
>
> is failing to import.



 Phew, thanks Rob! I was a bit stumped there :) I have identified the
 import
 issue (this is on a fresh checkout of Neutron, BTW, so I'm a little
 confused
 how this made it through the gate...

 (.venv)jpipes@uberbox:~/repos/openstack/neutron$ python
 Python 2.7.4 (default, Sep 26 2013, 03:20:26)
 [GCC 4.7.3] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import neutron.tests.unit.linuxbridge.test_lb_neutron_agent

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py", line 29, in <module>
     from neutron.plugins.linuxbridge.agent import linuxbridge_neutron_agent
   File "neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py", line 33, in <module>
     import pyudev
 ImportError: No module named pyudev

 Looks like pyudev needs to be added to requirements.txt... I've filed a
 bug:

 https://bugs.launchpad.net/neutron/+bug/1264687

 with a patch here:

 https://review.openstack.org/#/c/64333/

 Thanks again, much appreciated!
 -jay


> On 28 December 2013 13:41, Jay Pipes  wrote:
>>
>>
>> Please see:
>>
>> http://paste.openstack.org/show/57627/
>>
>> This is on a brand new git clone of neutron and then running
>> ./run_tests.sh
>> -V (FWIW, the same behavior occurs when running with tox -epy27 as
>> well...)
>>
>> I have zero idea what to do...any help would be appreciated!
>>
>> It's almost like the subunit stream is being dumped as-is into the
>> console.
>>
>> Best!
>> -jay
>
>
>
>
>


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> It looks like the problem is that there is a dependency on pyudev
>> which only works properly on Linux. The neutron setup_hook does
>> properly install pyudev on Linux (which explains why the tests run in the
>> gate), but would not work properly on Windows or OS X. I assume folks
>> are trying to run the tests on something other than Linux?
>
>
> Nope, the problem occurs on Linux. I was using Ubuntu 13.04.
>
> I abandoned my patch after some neutron-cores said it wasn't correct to put
> Linux-only dependencies in requirements.txt and said "it was a packaging
> issue".
>
> The problem is that requirements.txt is *all about packaging issues*. Until
> we have some way of indicating "this dependency is only for
> Linux/Windows/whatever" in our requirements.txt files, this is going to be a
> pain in the butt.
>
>
>> Ne

Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-16 Thread Armando M.
+1
On Feb 13, 2014 5:52 PM, "Nachi Ueno"  wrote:

> +1
>
> On Wednesday, 12 February 2014, Mayur Patil wrote:
>
>> +1
>>
>> *--*
>> *Cheers,*
>> *Mayur*
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Armando M.
Thomas,

I feel your frustration, however before complaining please do follow
the actual chain of events.

Patch [1]: I asked a question which I never received an answer to.
Patch [2]: I did put a -1, but I have nothing against this patch per
se. It has only recently been abandoned, and my -1 was primarily meant to
give patch [1] the opportunity to be resumed.

No action on a negative review means automatic expiration; if you lose
interest in something you care about, whose fault is that?

A.

[1] = https://review.openstack.org/#/c/52757
[2] = https://review.openstack.org/#/c/68611

On 19 February 2014 06:28, Thomas Goirand  wrote:
> Hi,
>
> I've seen this one:
> https://review.openstack.org/#/c/68611/
>
> which is supposed to fix something for Postgres. This is funny, because
> I was doing the exact same patch for fixing it for SQLite. Though this
> was before the last summit in HK.
>
> Since then, I just gave up on having my Debian specific patch [1] being
> upstreamed. No review, despite my insistence. Mark, on the HK summit,
> told me that it was pending discussion about what would be the policy
> for SQLite.
>
> Guys, this is disappointing. That's the 2nd time the same patch is being
> blocked, with no explanations.
>
> Could 2 core reviewers have a *serious* look at this patch, and explain
> why it's not ok for it to be approved? If nobody says why, then could
> this be approved, so we can move on?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> [1]
> http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Armando M.
On 20 February 2014 14:13, Vincent Untz  wrote:
> Le jeudi 20 février 2014, à 12:02 -0800, Armando M. a écrit :
>> Thomas,
>>
>> I feel your frustration, however before complaining please do follow
>> the actual chain of events.
>>
>> Patch [1]: I asked a question which I never received an answer to.
>> Patch [2]: I did put a -1, but I have nothing against this patch per
>> se. This was only been recently abandoned and my -1 lied primarily to
>> give patch [1] the opportunity to be resumed.
>
> Well, I did reply to your comment on the same day, so I'm not sure what
> else I, as submitter, could have done more to address your comment and
> convince you to change the -1 to +1.
>
>> No action on a negative review means automatic expiration, if you lose
>> interest in something you care about whose fault is that?
>
> I beg to disagree. If we let patches go to automatic expiration, then we
> as a project will just lose contributors. I don't think we should accept
> that as inevitable.

The power to restore a change is in the hands of the contributor, not the reviewer.

Issues have different priorities and people shouldn't feel singled out
if their changes lose steam. The best course of action is to keep
sticking by them until the light at the end of the tunnel is in sight
:)

That said, I think one of the issues that affects the delay in approving
patches dealing with DB migrations (that apply across multiple Neutron
releases) is the lack of a stable CI job (like Grenade) that validates
them and relieves the core reviewer of some of the burden of going through
the patch, the testbed, etc.

This is coming, though; we just need to be more patient. Venting
frustration doesn't fix code!

A.

>
> I just restored the patch, btw :-)
>
> Vincent
>
>> A.
>>
>> [1] = https://review.openstack.org/#/c/52757
>> [2] = https://review.openstack.org/#/c/68611
>>
>> On 19 February 2014 06:28, Thomas Goirand  wrote:
>> > Hi,
>> >
>> > I've seen this one:
>> > https://review.openstack.org/#/c/68611/
>> >
>> > which is suppose to fix something for Postgress. This is funny, because
>> > I was doing the exact same patch for fixing it for SQLite. Though this
>> > was before the last summit in HK.
>> >
>> > Since then, I just gave up on having my Debian specific patch [1] being
>> > upstreamed. No review, despite my insistence. Mark, on the HK summit,
>> > told me that it was pending discussion about what would be the policy
>> > for SQLite.
>> >
>> > Guys, this is disappointing. That's the 2nd time the same patch is being
>> > blocked, with no explanations.
>> >
>> > Could 2 core reviewers have a *serious* look at this patch, and explain
>> > why it's not ok for it to be approved? If nobody says why, then could
>> > this be approved, so we can move on?
>> >
>> > Cheers,
>> >
>> > Thomas Goirand (zigo)
>> >
>> > [1]
>> > http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Happy people are not in a hurry.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Armando M.
Nice one!

On 21 February 2014 11:22, Aaron Rosen  wrote:
> This should fix the false positive for brocade:
> https://review.openstack.org/#/c/75486/
>
> Aaron
>
>
> On Fri, Feb 21, 2014 at 10:34 AM, Aaron Rosen  wrote:
>>
>> Hi,
>>
>> Yesterday, I pushed a patch to review and was surprised that several of
>> the third party CI systems reported back that the patch-set worked where it
>> definitely shouldn't have. Anyways, I tested out my theory a little more and
>> it turns out a few of the 3rd party CI systems for neutron are just
>> returning  SUCCESS even if the patch set didn't run successfully
>> (https://review.openstack.org/#/c/75304/).
>>
>> Here's a short summary of what I found.
>>
>> Hyper-V CI -- This seems like an easy fix as it's posting "build
>> succeeded" but also puts to the side "test run failed". Would probably be a
>> good idea to remove the "build succeeded" message to avoid any confusion.
>>
>>
>> Brocade CI - From the log files it posts it shows that it tries to apply
>> my patch but fails:
>>
>> 2014-02-20 20:23:48 + cd /opt/stack/neutron
>> 2014-02-20 20:23:48 + git fetch
>> https://review.openstack.org/openstack/neutron.git refs/changes/04/75304/1
>> 2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
>> 2014-02-20 20:24:00  * branchrefs/changes/04/75304/1 ->
>> FETCH_HEAD
>> 2014-02-20 20:24:00 + git checkout FETCH_HEAD
>> 2014-02-20 20:24:00 error: Your local changes to the following files would
>> be overwritten by checkout:
>> 2014-02-20 20:24:00  etc/neutron/plugins/ml2/ml2_conf_brocade.ini
>> 2014-02-20 20:24:00
>>  neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
>> 2014-02-20 20:24:00 Please, commit your changes or stash them before you
>> can switch branches.
>> 2014-02-20 20:24:00 Aborting
>> 2014-02-20 20:24:00 + cd /opt/stack/neutron
>>
>> but still continues running (without my patchset) and reports success. --
>> This actually looks like a devstack bug (I'll check it out).
>>
>> PLUMgrid CI - Seems to always vote +1 without a failure
>> (https://review.openstack.org/#/dashboard/10117) though the logs are private
>> so we can't really tell whats going on.
>>
>> I was thinking it might be worthwhile or helpful to have a job that tests
>> that CI actually fails when we expect it to.
>>
>> Best,
>>
>> Aaron
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-18 Thread Armando M.
Hi Carl,

Thanks for kicking this off. I am also willing to help, though only as a core
reviewer of blueprints and code submissions.

As for the ML2 agent, we all know that for historic reasons Neutron has
grown to be not only a networking orchestration project but also a
reference implementation that resembles what some might call an SDN
controller.

I think that most of the Neutron folks realize that we need to move away
from this model and rely on a strong open source SDN alternative; for these
reasons, I don't think that pursuing an ML2 agent is a path we should
go down anymore. It's time and energy that could be more effectively
spent elsewhere, especially on the refactoring. Now, if the refactoring
effort ends up being labelled ML2 Agent, I would be okay with it, but my
gut feeling tells me that any attempt at consolidating code to embrace more
than one agent logic at once is gonna derail the major goal of paying down
the so-called agent debt.

My 2c
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-18 Thread Armando M.
Mark, Kyle,

What is the strategy for tracking the progress and all the details about
this initiative? Blueprint spec, wiki page, or something else?

One thing I personally found useful about the spec approach adopted in [1],
was that we could quickly and effectively incorporate community feedback;
having said that I am not sure that the same approach makes sense here,
hence the question.

Also, what happens to experimental efforts that are neither L2-3 nor L4-7
(e.g. TaaS or NFV-related ones) but may still benefit from this
decomposition (as it promotes better separation of responsibilities)? Where
would they live? I am not sure we made any particular progress on the
incubator project idea that was floated a while back.

Cheers,
Armando

[1] https://review.openstack.org/#/c/134680/

On 18 November 2014 15:32, Doug Wiegley  wrote:

>  Hi,
>
>  > so the specs repository would continue to be shared during the Kilo
> cycle.
>
>  One of the reasons to split is that these two teams have different
> priorities and velocities.  Wouldn’t that be easier to track/manage as
> separate launchpad projects and specs repos, irrespective of who is
> approving them?
>
>  Thanks,
> doug
>
>
>
>  On Nov 18, 2014, at 10:31 PM, Mark McClain  wrote:
>
>  All-
>
> Over the last several months, the members of the Networking Program have
> been discussing ways to improve the management of our program.  When the
> Quantum project was initially launched, we envisioned a combined service
> that included all things network related.  This vision served us well in
> the early days as the team mostly focused on building out layers 2 and 3;
> however, we’ve run into growth challenges as the project started building
> out layers 4 through 7.  Initially, we thought that development would float
> across all layers of the networking stack, but the reality is that the
> development concentrates around either layer 2 and 3 or layers 4 through
> 7.  In the last few cycles, we’ve also discovered that these concentrations
> have different velocities and a single core team forces one to match the
> other to the detriment of the one forced to slow down.
>
> Going forward we want to divide the Neutron repository into two separate
> repositories lead by a common Networking PTL.  The current mission of the
> program will remain unchanged [1].  The split would be as follows:
>
> Neutron (Layer 2 and 3)
> - Provides REST service and technology agnostic abstractions for layer 2
> and layer 3 services.
>
> Neutron Advanced Services Library (Layers 4 through 7)
> - A python library which is co-released with Neutron
> - The advanced services library provides controllers that can be configured
> to manage the abstractions for layer 4 through 7 services.
>
> Mechanics of the split:
> - Both repositories are members of the same program, so the specs
> repository would continue to be shared during the Kilo cycle.  The PTL and
> the drivers team will retain approval responsibilities they now share.
> - The split would occur around Kilo-1 (subject to coordination of the
> Infra and Networking teams). The timing is designed to enable the proposed
> REST changes to land around the time of the December development sprint.
> - The core team for each repository will be determined and proposed by
> Kyle Mestery for approval by the current core team.
> - The Neutron Server and the Neutron Adv Services Library would be
> co-gated to ensure that incompatibilities are not introduced.
> - The Advanced Services Library would be an optional dependency of Neutron,
> so integrated cross-project checks would not be required to enable it
> during testing.
> - The split should not adversely impact operators and the Networking
> program should maintain standard OpenStack compatibility and deprecation
> cycles.
>
> This proposal to divide into two repositories achieved a strong consensus
> at the recent Paris Design Summit and it does not conflict with the current
> governance model or any proposals circulating as part of the ‘Big Tent’
> discussion.
>
> Kyle and mark
>
> [1]
> https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-18 Thread Armando M.
Hi,

On 18 November 2014 16:22, Ian Wells  wrote:

> Sorry I'm a bit late to this, but that's what you get from being on
> holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
> I swear I'll get to them.)
>

Ah! I hope it was good at least :)


>
> On 17 November 2014 01:13, Mathieu Rohon  wrote:
>
>> Hi
>>
>> On Fri, Nov 14, 2014 at 6:26 PM, Armando M.  wrote:
>> > Last Friday I recall we had two discussions around this topic. One in
>> the
>> > morning, which I think led to Maruti to push [1]. The way I understood
>> [1]
>> > was that it is an attempt at unifying [2] and [3], by choosing the API
>> > approach of one and the architectural approach of the other.
>> >
>> > [1] https://review.openstack.org/#/c/134179/
>> > [2] https://review.openstack.org/#/c/100278/
>> > [3] https://review.openstack.org/#/c/93613/
>> >
>> > Then there was another discussion in the afternoon, but I am not 100%
>> of the
>> > outcome.
>>
>> Me neither, that's why I'd like ian, who led this discussion, to sum
>> up the outcome from its point of view.
>>
>
> So, the gist of what I said is that we have three, independent, use cases:
>
> - connecting two VMs that like to tag packets to each other (VLAN clean
> networks)
> - connecting many networks to a single VM (trunking ports)
> - connecting the outside world to a set of virtual networks
>
> We're discussing that last use case here.  The point I made was that:
>
> - there are more encaps in the world than just VLANs
> - they can all be solved in the same way using an edge API
>

No disagreement all the way up to this point, assuming that I don't worry
about what this edge API really is.


> - if they are solved using an edge API, the job of describing the network
> you're trying to bring in (be it switch/port/vlan, or MPLS label stack, or
> l2tpv3 endpoint data) is best kept outside of Neutron's API, because
> Neutron can't usefully do anything with it other than validate it and hand
> it off to whatever network control code is being used.  (Note that most
> encaps will likely *not* be implemented in Neutron's inbuilt control code.)
>

This is where the disagreement begins, as far as I am concerned; in fact we
already have a well defined way of describing what a network entity in
Neutron is, namely an L2 broadcast domain abstraction. An L2 gateway API
that is well defined and well scoped should just express how one can be
connected to another, nothing more, at least as a starting point.


>
> Now, the above argument says that we should keep this out of Neutron.  The
> problem with that is that people are using the OVS mechanism driver and
> would like a solution that works with that, implying something that's
> *inside* Neutron.  For that case, it's certainly valid to consider another
> means of implementation, but it wouldn't be my personal choice.  (For what
> it's worth I'm looking at ODL based controller implementations, so this
> isn't an issue for me personally.)
>
> If one were to implement the code in the Neutron API, even as an
> extension, I would question whether it's a sensible thing to attempt before
> the RPC server/REST server split is done, since it also extends the API
> between them.
>
>> > All this churn makes me believe that we probably just need to stop
>> > pretending we can achieve any sort of consensus on the approach and let the
>> > different alternatives develop independently, assuming they can all develop
>> > independently, and then let natural evolution take its course :)
>>
>> I tend to agree, but I think that one of the reasons why we are looking
>> for a consensus is that API evolutions proposed through
>> Neutron-spec are rejected by core-dev, because they rely on external
>> components (SDN controllers, proprietary hardware...) or they are not a
>> high priority for neutron core-dev.
>> By finding a consensus, we show that several players are interested in
>> such an API, and it helps to convince core-dev that this use-case, and
>> its API, is missing in neutron.
>>
>
> There are lots of players interested in an API, that much is clear, and
> all the more so if you consider that this feature has strong analogies with
> use cases such as switch port exposure and MPLS.  The problem is that it's
> clearly a fairly complex API with some variety of ways to implement it, and
> both of these things work against its acceptance.  Additionally, per the
> above discussion, I would say it's not essential for it to be c

Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Armando M.
Hi Sukhdev,

Hope you enjoyed Europe ;)

On 19 November 2014 17:19, Sukhdev Kapur  wrote:

> Folks,
>
> Like Ian, I am jumping in this very late as well - as I decided to travel
> around Europe after the summit; I just returned and am catching up :-):-)
>
> I have noticed that this thread has gotten fairly convoluted and painful
> to read.
>
> I think Armando summed it up well in the beginning of the thread. There
> are basically three written proposals (listed in Armando's email - I pasted
> them again here).
>
> [1] https://review.openstack.org/#/c/134179/
> [2] https://review.openstack.org/#/c/100278/
> [3] https://review.openstack.org/#/c/93613/
>
> On this thread I see that the authors of the first two proposals have already
> agreed to consolidate and work together. This leaves us with two proposals.
> Both Ian and I were involved with the third proposal [3] and have
> reasonable idea about it. IMO, the use cases addressed by the third
> proposal are very similar to use cases addressed by proposal [1] and [2]. I
> can volunteer to  follow up with Racha and Stephen from Ericsson to see if
> their use case will be covered with the new combined proposal. If yes, we
> have one converged proposal. If no, then we modify the proposal to
> accommodate their use case as well. Regardless, I will ask them to review
> and post their comments on [1].
>
> Having said that, this covers what we discussed during the morning session
> on Friday in Paris. Now, comes the second part which Ian brought up in the
> afternoon session on Friday.
> My initial reaction, when I heard his use case, was that this new
> proposal/API should cover that use case as well (I am being a bit optimistic
> here :-)). If not, rather than going into the nitty gritty details of the
> use case, let's see what modification is required to the proposed API to
> accommodate Ian's use case and adjust it accordingly.
>
> Now, the last point (already brought up by Salvatore as well as Armando) -
> the abstraction of the API, so that it meets the Neutron API criteria. I
> think this is the critical piece. I also believe the API proposed by [1] is
> very close. We should clean it up and take out references to ToR's or
> physical vs virtual devices. The API should work at an abstract level so
> that it can deal with both physical as well virtual devices. If we can
> agree to that, I believe we can have a solid solution.
>

Yes, I do think that the same API can target both: a 100% software solution
for L2GW as well as one that may want to rely on hardware support, in the
same spirit as any other Neutron API. I made the same point on spec [1].


>
>
> Having said that I would like to request the community to review the
> proposal submitted by Maruti in [1] and post comments on the spec with the
> intent to get a closure on the API. I see lots of good comments already on
> the spec. Lets get this done so that we can have a workable (even if not
> perfect) version of API in Kilo cycle. Something which we can all start to
> play with. We can always iterate over it, and make change as we get more
> and more use cases covered.
>

So far it seems like proposal [1] has the most momentum. I'd like to
consider [3] as one potential software implementation of the proposed API.
As I mentioned earlier, I'd rather start with a well-defined problem, free
of potential confusion and not open to subjective interpretation; a loose
API suffers from both pitfalls, hence my suggestion to go with the API
proposed in [1].


>
> Make sense?
>
> cheers..
> -Sukhdev
>
>
> On Tue, Nov 18, 2014 at 6:44 PM, Armando M.  wrote:
>
>> Hi,
>>
>> On 18 November 2014 16:22, Ian Wells  wrote:
>>
>>> Sorry I'm a bit late to this, but that's what you get from being on
>>> holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
>>> I swear I'll get to them.)
>>>
>>
>> Ah! I hope it was good at least :)
>>
>>
>>>
>>> On 17 November 2014 01:13, Mathieu Rohon 
>>> wrote:
>>>
>>>> Hi
>>>>
>>>> On Fri, Nov 14, 2014 at 6:26 PM, Armando M.  wrote:
>>>> > Last Friday I recall we had two discussions around this topic. One in
>>>> the
>>>> > morning, which I think led to Maruti to push [1]. The way I
>>>> understood [1]
>>>> > was that it is an attempt at unifying [2] and [3], by choosing the API
>>>> > approach of one and the architectural approach of the other.
>>>> >
>>>> > [1] https://review.openstack.org/#/c/134179/
>>>> > [2] https://review.openstack.org/#/c/100278/
>>>

Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-20 Thread Armando M.
> "Iterate" is the key here I believe. As long as we pretend to achieve the
> perfect API at the first attempt we'll just keep having this discussion. I
> think the first time a L2 GW API was proposed, it was for Grizzly.
> For instance, it might be relatively easy to define an API which can handle
> both physical and virtual devices. The user workflow for a ToR-terminated
> L2 GW is different from the workflow for a virtual appliance owned by a
> tenant, and this will obviously be reflected in the API. On the other hand, a
> BGP VPN might be a completely different use case, and therefore have a
> completely different set of APIs.
>

+1


>
> Beyond APIs there are two more things to mention.
> First, we need some sort of open source reference implementation for every
> use case. For hardware VTEP obviously this won't be possible, but perhaps
> [1] can be used for integration tests.
>

I think that, once the API settles, there may be multiple implementations for
bridging logical nets with physical ones; one could be the hardware vtep
schema implemented by a switch, another could be the same schema implemented
on a white box, or something totally different. If we designed the API
correctly, that should not matter. I believe that [1] should clearly state
that. I'll make sure this point is captured.


> The complexity of providing this implementation might probably drive the
> roadmap for supporting L2 GW use cases.
> Second, I still believe this is an "advanced service" and therefore a
> candidate for being outside of neutron's main repo (which, if you're
> following the discussions does not mean "outside of neutron"). The
> arguments I've seen so far do not yet convince me this thing has to be
> tightly integrated into the core neutron.
>

My working assumption is that this is going to bake elsewhere outside the
core. However, this will need integration hooks into the core the same way
other advanced services do, so it's of paramount importance that the vendor
spin-off goes ahead, so that efforts like this one can evolve at the pace
they are comfortable with.


>
>
> Salvatore
>
>
> [1] http://openvswitch.org/pipermail/dev/2013-October/032530.html
>
>
>
>
>>
>> Make sense?
>>
>> cheers..
>> -Sukhdev
>>
>>
>> On Tue, Nov 18, 2014 at 6:44 PM, Armando M.  wrote:
>>
>>> Hi,
>>>
>>> On 18 November 2014 16:22, Ian Wells  wrote:
>>>
>>>> Sorry I'm a bit late to this, but that's what you get from being on
>>>> holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
>>>> I swear I'll get to them.)
>>>>
>>>
>>> Ah! I hope it was good at least :)
>>>
>>>
>>>>
>>>> On 17 November 2014 01:13, Mathieu Rohon 
>>>> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> On Fri, Nov 14, 2014 at 6:26 PM, Armando M.  wrote:
>>>>> > Last Friday I recall we had two discussions around this topic. One
>>>>> in the
>>>>> > morning, which I think led to Maruti to push [1]. The way I
>>>>> understood [1]
>>>>> > was that it is an attempt at unifying [2] and [3], by choosing the
>>>>> API
>>>>> > approach of one and the architectural approach of the other.
>>>>> >
>>>>> > [1] https://review.openstack.org/#/c/134179/
>>>>> > [2] https://review.openstack.org/#/c/100278/
>>>>> > [3] https://review.openstack.org/#/c/93613/
>>>>> >
>>>>> > Then there was another discussion in the afternoon, but I am not
>>>>> 100% of the
>>>>> > outcome.
>>>>>
>>>>> Me neither, that's why I'd like ian, who led this discussion, to sum
>>>>> up the outcome from its point of view.
>>>>>
>>>>
>>>> So, the gist of what I said is that we have three, independent, use
>>>> cases:
>>>>
>>>> - connecting two VMs that like to tag packets to each other (VLAN clean
>>>> networks)
>>>> - connecting many networks to a single VM (trunking ports)
>>>> - connecting the outside world to a set of virtual networks
>>>>
>>>> We're discussing that last use case here.  The point I made was
>>>> that:
>>>>
>>>> - there are more encaps in the world than just VLANs
>>>> - they can all be solved in the same way using an edge API
>>>>
>>>
>>> No 

Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-25 Thread Armando M.
Hi Henry,

Thanks for your input.


> No intention to argue about agent vs. agentless, built-in reference vs.
> external controller; OpenStack is an open community. But I just want
> to say that modularized agent re-factoring does make a lot of sense,
> while forcing customers to piggyback an extra SDN controller on their
> Cloud solution is not the only future direction of Neutron.
>

My main point was made with the Kilo timeframe in mind. Once we make a
clear demarcation between the various components, they can develop
in isolation, which is very valuable IMO.
If enough critical mass gets behind an agent-based solution to control the
networking layer, whatever it may be, I would definitely be okay with that,
but I don't think that is what Neutron should be concerned about; rather, it
should enable these extra capabilities, the same way Nova does with
hypervisors.

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-11-26 Thread Armando M.
Hi Don,

You should look at this one:

https://wiki.openstack.org/wiki/NeutronSubteamCharters

Also, it would be good to start feeding the content of that gdoc into a
neutron-specs blueprint, using template [1] and process [2], bearing in
mind these dates [3]

1.
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/template.rst
2. https://wiki.openstack.org/wiki/Blueprints
3. https://wiki.openstack.org/wiki/NeutronKiloProjectPlan

HTH
Armando


On 24 November 2014 at 14:27, Carl Baldwin  wrote:

> Don,
>
> Could the spec linked to your BP be moved to the specs repository?
> I'm hesitant to start reading it as a google doc when I know I'm going
> to want to make comments and ask questions.
>
> Carl
>
> On Thu, Nov 13, 2014 at 9:19 AM, Don Kehn  wrote:
> > If this shows up twice sorry for the repeat:
> >
> > Armando, Carl:
> > During the Summit, Armando and I had a very quick conversation concern a
> > blue print that I submitted,
> > https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration and
> > Armando had mention the possibility of getting together a sub-group
> tasked
> > with DHCP Neutron concerns. I have talk with Infoblox folks (see
> > https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), and
> everyone
> > seems to be in agreement that there is synergy especially concerning the
> > development of a relay and potentially looking into how DHCP is handled.
> In
> > addition during the Fridays meetup session on DHCP that I gave there
> seems
> > to be some general interest by some of the operators as well.
> >
> > So what would be the formality in going forth to start a sub-group and
> > getting this underway?
> >
> > DeKehn
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2014-12-02 Thread Armando M.
Congrats to Henry and Kevin, +1!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-05 Thread Armando M.
Hi folks,

For a few weeks now the Neutron team has worked tirelessly on [1].

This initiative stems from the fact that as the project matures, evolution
of processes and contribution guidelines need to evolve with it. This is to
ensure that the project can keep on thriving in order to meet the needs of
an ever growing community.

The effort of documenting intentions, and fleshing out the various details
of the proposal is about to reach an end, and we'll soon kick the tires to
put the proposal into practice. Since the spec has grown pretty big, I'll
try to capture the tl;dr below.

If you have any comment please do not hesitate to raise them here and/or
reach out to us.

tl;dr >>>

From the Kilo release, we'll initiate a set of steps to change the
following areas:

   - Code structure: every plugin or driver that exists or wants to exist
   as part of the Neutron project is decomposed into a slim vendor
   integration (which lives in the Neutron repo), plus a bulkier vendor
   library (which lives in an independent, publicly available repo); a
   minimal sketch of such a shim follows after this list;
   - Contribution process: this extends to the following aspects:
      - Design and Development: the process is largely unchanged for the
      part that pertains to the vendor integration; the maintainer team is
      fully self-governed for the design and development of the vendor
      library;
      - Testing and Continuous Integration: maintainers will be required
      to support their vendor integration with 3rd party CI testing; the
      requirements for 3rd party CI testing are largely unchanged;
      - Defect management: the process is largely unchanged; issues
      affecting the vendor library can be tracked with whichever
      tool/process the maintainers see fit. In cases where vendor library
      fixes need to be reflected in the vendor integration, the usual
      OpenStack defect management processes apply.
      - Documentation: there will be some changes to the way plugins and
      drivers are documented, with the intention of promoting
      discoverability of the integrated solutions.
   - Adoption and transition plan: we strongly advise maintainers to stay
   abreast of the developments of this effort, as their code, their CI,
   etc. will be affected. The core team will provide guidelines and support
   throughout this cycle to ensure a smooth transition.
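
To make the first bullet concrete, here is a minimal, purely illustrative
sketch of what a slim vendor integration could look like once the bulk of
the code lives in a separate vendor library (the module and class names
below are made up; they are not taken from any existing plugin):

    # Hypothetical in-tree shim; all backend logic lives in the external
    # 'networking_acme' library, published and maintained by the vendor.
    from networking_acme import plugin as acme_plugin


    class AcmePluginV2(acme_plugin.AcmeNeutronPluginV2):
        """Vendor integration point kept in the Neutron tree.

        Only configuration defaults, entry points and upgrade glue belong
        here; features and bug fixes land in networking_acme instead.
        """
        pass

How thin the shim ends up being is ultimately up to each maintainer; the
spec only requires that the in-tree portion stays slim.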

To learn more, please refer to [1].

Many thanks,
Armando

[1] https://review.openstack.org/#/c/134680
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-12-05 Thread Armando M.
For anyone who had an interest in following this thread, they might want to
have a look at [1] and [2] (the latter being the tl;dr version of [1]).

HTH
Armando

[1] https://review.openstack.org/#/c/134680
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052346.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread Armando M.

> Thanks for your attention and for reading through this
>
> Salvatore
>
> [1]
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22
>
> On 8 December 2014 at 21:51, Maru Newby  wrote:
>
>>
>> On Dec 7, 2014, at 10:51 AM, Gary Kotton  wrote:
>>
>> > Hi Kyle,
>> > I am not missing the point. I understand the proposal. I just think
>> that it has some shortcomings (unless I misunderstand, which will certainly
>> not be the first time and most definitely not the last). The thinning out
>> is to have a shim in place. I understand this and this will be the entry
>> point for the plugin. I do not have a concern for this. My concern is that
>> we are not doing this with the ML2 off the bat. That should lead by example
>> as it is our reference architecture. Lets not kid anyone, but we are going
>> to hit some problems with the decomposition. I would prefer that it be done
>> with the default implementation. Why?
>>
>> The proposal is to move vendor-specific logic out of the tree to increase
>> vendor control over such code while decreasing load on reviewers.  ML2
>> doesn’t contain vendor-specific logic - that’s the province of ML2 drivers
>> - so it is not a good target for the proposed decomposition by itself.
>>
>>
>> >   • Cause we will fix them quicker as it is something that prevent
>> Neutron from moving forwards
>> >   • We will just need to fix in one place first and not in N (where
>> N is the vendor plugins)
>> >   • This is a community effort – so we will have a lot more eyes on
>> it
>> >   • It will provide a reference architecture for all new plugins
>> that want to be added to the tree
>> >   • It will provide a working example for plugin that are already
>> in tree and are to be replaced by the shim
>> > If we really want to do this, we can say freeze all development (which
>> is just approvals for patches) for a few days so that we will can just
>> focus on this. I stated what I think should be the process on the review.
>> For those who do not feel like finding the link:
>> >   • Create a stack forge project for ML2
>> >   • Create the shim in Neutron
>> >   • Update devstack for the to use the two repos and the shim
>> > When #3 is up and running we switch for that to be the gate. Then we
>> start a stopwatch on all other plugins.
>>
>> As was pointed out on the spec (see Miguel’s comment on r15), the ML2
>> plugin and the OVS mechanism driver need to remain in the main Neutron repo
>> for now.  Neutron gates on ML2+OVS and landing a breaking change in the
>> Neutron repo along with its corresponding fix to a separate ML2 repo would
>> be all but impossible under the current integrated gating scheme.
>> Plugins/drivers that do not gate Neutron have no such constraint.
>>
>>
>> Maru
>>
>>
>> > Sure, I’ll catch you on IRC tomorrow. I guess that you guys will bash
>> out the details at the meetup. Sadly I will not be able to attend – so you
>> will have to delay on the tar and feathers.
>> > Thanks
>> > Gary
>> >
>> >
>> > From: "mest...@mestery.com" 
>> > Reply-To: OpenStack List 
>> > Date: Sunday, December 7, 2014 at 7:19 PM
>> > To: OpenStack List 
>> > Cc: "openst...@lists.openstack.org" 
>> > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition
>> >
>> > Gary, you are still missing the point of this proposal. Please see my
>> comments in review. We are not forcing things out of tree, we are thinning
>> them. The text you quoted in the review makes that clear. We will look at
>> further decomposing ML2 post Kilo, but we have to be realistic with what we
>> can accomplish during Kilo.
>> >
>> > Find me on IRC Monday morning and we can discuss further if you still
>> have questions and concerns.
>> >
>> > Thanks!
>> > Kyle
>> >
>> > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton  wrote:
>> >> Hi,
>> >> I have raised my concerns on the proposal. I think that all plugins
>> should be treated on an equal footing. My main concern is having the ML2
>> plugin in tree whilst the others will be moved out of tree will be
>> problematic. I think that the model will be complete if the ML2 was also
>> out of tree. This will help crystalize the idea and make sure that the
>> model works correctly.
>> >> Thanks
>> >> Gary
>> >>
>> >>

Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread Armando M.
2-driver/blob/icehouse/test-requirements.txt
> [3]
> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/apic_ml2/neutron/db/migration/alembic_migrations/env.py
> [4]
> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/setup.cfg
> [5] https://github.com/openstack-dev/cookiecutter
> [6] https://github.com/stackforge/group-based-policy
>
> On Tue, Dec 9, 2014 at 9:53 AM, Salvatore Orlando 
> wrote:
>
>>
>>
>> On 9 December 2014 at 17:32, Armando M.  wrote:
>>
>>>
>>>
>>> On 9 December 2014 at 09:41, Salvatore Orlando 
>>> wrote:
>>>
>>>> I would like to chime into this discussion wearing my plugin developer
>>>> hat.
>>>>
>>>> We (the VMware team) have looked very carefully at the current proposal
>>>> for splitting off drivers and plugins from the main source code tree.
>>>> Therefore the concerns you've heard from Gary are not just ramblings but
>>>> are the results of careful examination of this proposal.
>>>>
>>>> While we agree with the final goal, the feeling is that for many plugin
>>>> maintainers this process change might be too much for what can be
>>>> accomplished in a single release cycle.
>>>>
>>> We actually gave a lot more than a cycle:
>>>
>>>
>>> https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst
>>> LINE 416
>>>
>>> And in all honesty, I can only say that getting this done by such an
>>> experienced team as the Neutron team @VMware shouldn't take that long.
>>>
>>
>> We are probably not experienced enough. We always love to learn new
>> things.
>>
>>
>>>
>>> By the way, if Kyle can do it in his teeny tiny time that he has left
>>> after his PTL duties, then anyone can do it! :)
>>>
>>> https://review.openstack.org/#/c/140191/
>>>
>>
>> I think I should be able to use mv & git push as well - I think however
>> there's a bit more than that to it.
>>
>>
>>>
>>> As a member of the drivers team, I am still very supportive of the
>>>> split, I just want to make sure that it’s made in a sustainable way; I also
>>>> understand that “sustainability” has been one of the requirements of the
>>>> current proposal, and therefore we should all be on the same page on this
>>>> aspect.
>>>>
>>>> However, we did a simple exercise trying to assess the amount of work
>>>> needed to achieve something which might be acceptable to satisfy the
>>>> process. Without going into too many details, this requires efforts for:
>>>>
>>>> - refactor the code to achieve a plugin module simple and thin enough
>>>> to satisfy the requirements. Unfortunately a radical approach like the one
>>>> in [1] with a reference to an external library is not pursuable for us
>>>>
>>>> - maintaining code repositories outside of the neutron scope and the
>>>> necessary infrastructure
>>>>
>>>> - reinforcing our CI infrastructure, and improve our error detection
>>>> and log analysis capabilities to improve reaction times upon failures
>>>> triggered by upstream changes. As you know, even if the plugin interface is
>>>> solid-ish, the dependency on the db base class increases the chances of
>>>> upstream changes breaking 3rd party plugins.
>>>>
>>>
>>> No-one is advocating for the approach laid out in [1], but a lot of code can
>>> be moved elsewhere (like the nsxlib) without too much effort. Don't forget
>>> that not so long ago I was the maintainer of this plugin and the one who
>>> built the VMware NSX CI; I know very well what it takes to scope this
>>> effort, and I can support you in the process.
>>>
>>
>> Thanks for this clarification. I was sure that you guys were not
>> advocating for a ninja-split thing, but I wanted just to be sure of that.
>> I'm also pretty sure our engineering team values your support.
>>
>>> The feedback from our engineering team is that satisfying the
>>>> requirements of this new process might not be feasible in the Kilo
>>>> timeframe, both for existing plugins and for new plugins and drivers that
>>>> should be upstreamed (there are a few proposed on neutron-specs at the
>>>> moment, which are all in -2 status considering the impending approval of
>>>> the s

Re: [openstack-dev] [neutron] Vendor Plugin Decomposition and NeutronClient vendor extension

2014-12-12 Thread Armando M.
On 12 December 2014 at 22:18, Ryu Ishimoto  wrote:
>
>
> Hi All,
>
> It's great to see the vendor plugin decomposition spec[1] finally getting
> merged!  Now that the spec is completed, I have a question on how this may
> impact neutronclient, and in particular, its handling of vendor extensions.
>

Thanks for the excitement :)


>
> One of the great things about splitting out the plugins is that it will
> allow vendors to implement vendor extensions more rapidly.  Looking at the
> neutronclient code, however, it seems that these vendor extension commands
> are embedded inside the project, and doesn't seem easily extensible.  It
> feels natural that, now that neutron vendor code is split out,
> neutronclient should also do the same.
>
> Of course, you could always fork neutronclient yourself, but I'm wondering
> if there is any plan on improving this.  Admittedly, I don't have a great
> solution myself but I'm thinking something along the line of allowing
> neutronclient to load commands from an external directory.  I am not
> familiar enough with neutronclient to know if there are technical
> limitation to what I'm suggesting, but I would love to hear thoughts of
> others on this.
>

There is quite a bit of road ahead of us. We haven't yet thought about or
considered how to handle extensions client-side. Server-side, the extension
mechanism is already quite flexible, but we gotta learn to walk before we
can run!

Having said that, your points are well taken, but most likely we won't be
making much progress on these until we have provided and guaranteed a
smooth transition for all plugins and drivers, as suggested by the spec
referenced below. Stay tuned!

Cheers,
Armando


>
> Thanks in advance!
>
> Best,
> Ryu
>
> [1] https://review.openstack.org/#/c/134680/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-12 Thread Armando M.
On 12 December 2014 at 23:01, Yuriy Shovkoplias 
wrote:
>
> Dear neutron community,
>
> Can you please clarify a couple of points on the vendor code decomposition?
>  - Assuming I would like to create a new driver now (Kilo development
> cycle) - is it already allowed (or mandatory) to follow the new process?
>
> https://review.openstack.org/#/c/134680/
>
>
Yes. See [1] for more details.


> - Assuming the new process is already in place, are the following
> guidelines still applicable for the vendor integration code (not for vendor
> library)?
>
> https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
> The following is a list of requirements for inclusion of code upstream:
>
>- Participation in Neutron meetings, IRC channels, and email lists.
>- A member of the plugin/driver team participating in code reviews of
>other upstream code.
>
>
I see no reason why you wouldn't follow those guidelines, as a general rule
of thumb. Having said that, some of the wording would need to be tweaked to
take the new contribution model into account. Bear in mind that I have
started adding some developer documentation in [2], to give a practical
guide to the proposal. More to follow.

Cheers,
Armando

[1]
http://docs-draft.openstack.org/80/134680/17/check/gate-neutron-specs-docs/2a7afdd/doc/build/html/specs/kilo/core-vendor-decomposition.html#adoption-and-deprecation-policy
[2]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/core-vendor-decomposition,n,z


> Regards,
> Yuri
>
> On Thu, Dec 11, 2014 at 3:23 AM, Gary Kotton  wrote:
>>
>>
>> On 12/11/14, 12:50 PM, "Ihar Hrachyshka"  wrote:
>>
>> >-BEGIN PGP SIGNED MESSAGE-
>> >Hash: SHA512
>> >
>> >+100. I vote -1 there and would like to point out that we *must* keep
>> >history during the split, and split from u/s code base, not random
>> >repositories. If you don't know how to achieve this, ask oslo people,
>> >they did it plenty of times when graduating libraries from
>> oslo-incubator.
>> >/Ihar
>> >
>> >On 10/12/14 19:18, Cedric OLLIVIER wrote:
>> >> <https://review.openstack.org/#/c/140191/>
>> >>
>> >> 2014-12-09 18:32 GMT+01:00 Armando M. > >> <mailto:arma...@gmail.com>>:
>> >>
>> >>
>> >> By the way, if Kyle can do it in his teeny tiny time that he has
>> >> left after his PTL duties, then anyone can do it! :)
>> >>
>> >> https://review.openstack.org/#/c/140191/
>>
>> This patch loses the recent hacking changes that we have made. This is a
>> slight example to try and highlight the problem that we may incur as a
>> community.
>>
>> >>
>> >> Fully cloning Dave Tucker's repository [1] and the outdated fork of
>> >> the ODL ML2 MechanismDriver included raises some questions (e.g.
>> >> [2]). I wish the next patch set removes some files. At least it
>> >> should take the mainstream work into account (e.g. [3]) .
>> >>
>> >> [1] https://github.com/dave-tucker/odl-neutron-drivers [2]
>> >> https://review.openstack.org/#/c/113330/ [3]
>> >> https://review.openstack.org/#/c/96459/
>> >>
>> >>
>> >> ___ OpenStack-dev
>> >> mailing list OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >-BEGIN PGP SIGNATURE-
>> >Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
>> >
>> >iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI
>> >ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY
>> >E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349
>> >PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl
>> >l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx
>> >lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM=
>> >=dfe/
>> >-END PGP SIGNATURE-
>> >
>> >___
>> >OpenStack-dev mailing list
>> >OpenStack-dev@lists.openstack.org
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-13 Thread Armando M.
This was more of a brute force fix!

I didn't have time to go with finesse, and instead I went in with the
hammer :)

That said, we want to make sure that the upgrade path to Kilo is as
painless as possible, so we'll need to review the Release Notes [1] to
reflect the fact that we'll be providing a seamless migration to the new
adv services structure.

[1] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes_6


Cheers,
Armando

On 12 December 2014 at 09:33, Kyle Mestery  wrote:
>
> This has merged now, FYI.
>
> On Fri, Dec 12, 2014 at 10:28 AM, Doug Wiegley 
> wrote:
>
>>  Hi all,
>>
>>  Neutron grenade jobs have been failing since late afternoon Thursday,
>> due to split fallout.  Armando has a fix, and it’s working its way through
>> the gate:
>>
>>  https://review.openstack.org/#/c/141256/
>>
>>  Get your rechecks ready!
>>
>>  Thanks,
>> Doug
>>
>>
>>   From: Douglas Wiegley 
>> Date: Wednesday, December 10, 2014 at 10:29 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [neutron] Services are now split out and
>> neutron is open for commits!
>>
>>   Hi all,
>>
>>  I’d like to echo the thanks to all involved, and thanks for the
>> patience during this period of transition.
>>
>>  And a logistical note: if you have any outstanding reviews against the
>> now missing files/directories (db/{loadbalancer,firewall,vpn}, services/,
>> or tests/unit/services), you must re-submit your review against the new
>> repos.  Existing neutron reviews for service code will be summarily
>> abandoned in the near future.
>>
>>  Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews.  I’ll
>> have that branch merged in the morning, and ping in channel when it’s ready
>> for submissions.
>>
>>  Finally, if any tempest lovers want to take a crack at splitting the
>> tempest runs into four, perhaps using salv’s reviews of splitting them in
>> two as a guide, and then creating jenkins jobs, we need some help getting
>> those going.  Please ping me directly (IRC: dougwig).
>>
>>  Thanks,
>> doug
>>
>>
>>   From: Kyle Mestery 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Wednesday, December 10, 2014 at 4:10 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: [openstack-dev] [neutron] Services are now split out and
>> neutron is open for commits!
>>
>>   Folks, just a heads up that we have completed splitting out the
>> services (FWaaS, LBaaS, and VPNaaS) into separate repositores. [1] [2] [3].
>> This was all done in accordance with the spec approved here [4]. Thanks to
>> all involved, but a special thanks to Doug and Anita, as well as infra.
>> Without all of their work and help, this wouldn't have been possible!
>>
>> Neutron and the services repositories are now open for merges again.
>> We're going to be landing some major L3 agent refactoring across the 4
>> repositories in the next four days, look for Carl to be leading that work
>> with the L3 team.
>>
>>  In the meantime, please report any issues you have in launchpad [5] as
>> bugs, and find people in #openstack-neutron or send an email. We've
>> verified things come up and all the tempest and API tests for basic neutron
>> work fine.
>>
>> In the coming week, we'll be getting all the tests working for the
>> services repositories. Medium term, we need to also move all the advanced
>> services tempest tests out of tempest and into the respective repositories.
>> We also need to beef these tests up considerably, so if you want to help
>> out on a critical project for Neutron, please let me know.
>>
>> Thanks!
>> Kyle
>>
>> [1] http://git.openstack.org/cgit/openstack/neutron-fwaas
>> [2] http://git.openstack.org/cgit/openstack/neutron-lbaas
>> [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
>> [4]
>> http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
>> [5] https://bugs.launchpad.net/neutron
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread Armando M.
On 15 December 2014 at 09:53, Neil Jerram 
wrote:
>
> Hi all,
>
> Following the approval for Neutron vendor code decomposition
> (https://review.openstack.org/#/c/134680/), I just wanted to comment
> that it appears to work fine to have an ML2 mechanism driver _entirely_
> out of tree, so long as the vendor repository that provides the ML2
> mechanism driver does something like this to register their driver as a
> neutron.ml2.mechanism_drivers entry point:
>
>   setuptools.setup(
>   ...,
>   entry_points = {
>   ...,
>   'neutron.ml2.mechanism_drivers': [
>   'calico = xyz.openstack.mech_xyz:XyzMechanismDriver',
>   ],
>   },
>   )
>
> (Please see
>
> https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c
> for the complete change and detail, for the example that works for me.)
>
> Then Neutron and the vendor package can be separately installed, and the
> vendor's driver name configured in ml2_conf.ini, and everything works.
>
> Given that, I wonder:
>
> - is that what the architects of the decomposition are expecting?


> - other than for the reference OVS driver, are there any reasons in
>   principle for keeping _any_ ML2 mechanism driver code in tree?
>

The approach you outlined is reasonable, and new plugins/drivers, like
yours, may find it easier to approach Neutron integration this way.
However, to ensure a smoother migration path for existing plugins and
drivers, it was deemed more sensible to go down the path being proposed in
the spec referenced above.
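
For readers following along, the moving parts are small. Keeping Neil's
example names (they are his illustration, not a recommendation), enabling
the out-of-tree driver is then just a matter of listing its entry point
name in the ML2 configuration, e.g.:

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    mechanism_drivers = openvswitch,calico

and, if the vendor package uses pbr, the same entry point registration can
live in setup.cfg rather than setup.py:

    [entry_points]
    neutron.ml2.mechanism_drivers =
        calico = xyz.openstack.mech_xyz:XyzMechanismDriver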


>
> Many thanks,
>  Neil
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread Armando M.
>
>
>
> Good questions. I'm also looking for the linux bridge MD, SRIOV MD...
> Who will be responsible for these drivers?
>
> Excellent question. In my opinion, 'technology' specific but not vendor
> specific MD (like SRIOV) should not be maintained by specific vendor. It
> should be accessible for all interested parties for contribution.
>

I don't think that anyone is suggesting that these drivers be developed
in silos; on the contrary, one of the objectives is to allow them to
evolve more rapidly, and in the open, where anyone can participate.


>
> The OVS driver is maintained by the Neutron community, vendor-specific
> hardware drivers by vendors, SDN controller drivers by their own community or
> vendor. But there are also other drivers like SRIOV, which are general for
> a lot of vendor-agnostic backends, and can't be maintained by a certain
> vendor/community.
>

Certain technologies, like the ones mentioned above, may require specific
hardware; even though they may not be particularly associated with a
specific vendor, some sort of vendor support is indeed required, like 3rd
party CI. So, grouping them together under a hardware-accelerated umbrella,
or whatever other name sticks, may make sense long term should the
number of drivers really ramp up as hinted below.
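
For what it's worth, the "bind-only" base MD hinted at below could indeed
be very thin. A purely hypothetical sketch, loosely based on the ML2 driver
API as it stands today (the class name and vif type are invented for
illustration and are not an existing driver):

    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api

    HYPOTHETICAL_VIF_TYPE = 'tap'  # a real driver would use an agreed constant


    class BindOnlyMechanismDriver(api.MechanismDriver):
        """Claims the port binding and does nothing else.

        All backend-specific plumbing is left to an external agent or
        controller that watches for ports bound with this vif type.
        """

        def initialize(self):
            pass

        def bind_port(self, context):
            # Bind the first segment we are offered; vif_details simply
            # advertises that port filtering is not handled here.
            for segment in context.network.network_segments:
                context.set_binding(segment[api.ID],
                                    HYPOTHETICAL_VIF_TYPE,
                                    {portbindings.CAP_PORT_FILTER: False})
                return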


>
> So, it would be better to keep some "general backend" MDs in tree besides
> SRIOV. There are also vif-type-tap, vif-type-vhostuser,
> hierarchy-binding-external-VTEP ... We can implement a very thin in-tree
> base MD that only handles "vif bind", which is backend-agnostic; then the
> backend provider is free to implement its own service logic, either via a
> backend agent, or via a driver derived from the base MD for agentless
> scenarios.
>
> Keeping general backend MDs in tree sounds reasonable.
> Regards
>
> > Many thanks,
> >  Neil
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-09 Thread Armando M.
>
> If we were standing at a place with a detailed manual upgrade document
> that explained how to do minimal VM downtime, that a few ops had gone
> through and proved out, that would be one thing. And we could figure out
> which parts made sense to put tooling around to make this easier for
> everyone.
>
> But we seem far from there.
>
> My suggestion is to start with a detailed document, figure out that it
> works, and build automation around that process.
>

The problem is that whatever documented solution we can come up with is
going to be so opinionated as to be hardly of any use in general terms, let
alone worth automating. Furthermore, its lifespan is going to be fairly
limited, which to me doesn't seem to justify the engineering cost,
and it's not like we haven't been trying...

I am not suggesting we give up entirely, but perhaps we should look at the
operator cases individually (for those who cannot afford cold migrations, or
who more simply stand up a new cloud to run side-by-side with the old cloud,
and leave the old one running until it drains). This means having someone
technical, with deep insight into these operators' environments, lead
the development effort required to adjust the open source components to
accommodate whatever migration process makes sense to them. Having someone
championing a general effort from the 'outside' does not sound like an
efficient use of anyone's time.

So this goes back to the question: who can effectively lead the technical
effort? I personally don't think we can have Neutron cores or Nova cores
lead this effort and be effective if they don't have direct access to, and
knowledge of, these cloud platforms and everything that pertains to them.

Armando


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2015-01-15 Thread Armando M.
+1

On 15 January 2015 at 14:46, Edgar Magana  wrote:

>  +1 For adding Doug as Core in Neutron!
>
>  I have seen his work on the services part and he is a great member of
> the OpenStack community!
>
>  Edgar
>
>   From: Kyle Mestery 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, January 15, 2015 at 2:31 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [neutron] Changes to the core team
>
>The last time we looked at core reviewer stats was in December [1]. In
> looking at the current stats, I'm going to propose some changes to the core
> team. Reviews are the most important part of being a core reviewer, so we
> need to ensure cores are doing reviews. The stats for the 90 day period [2]
> indicate some changes are needed for core reviewers who are no longer
> reviewing on pace with the other core reviewers.
>
>  First of all, I'm removing Sumit Naiksatam from neutron-core. Sumit has
> been a core reviewer for a long time, and his past contributions are very
> much thanked by the entire OpenStack Neutron team. If Sumit jumps back in
> with thoughtful reviews in the future, we can look at getting him back as a
> Neutron core reviewer. But for now, his stats indicate he's not reviewing
> at a level consistent with the rest of the Neutron core reviewers.
>
>  As part of the change, I'd like to propose Doug Wiegley as a new Neutron
> core reviewer. Doug has been actively reviewing code across not only all
> the Neutron projects, but also other projects such as infra. His help and
> work in the services split in December were the reason we were so
> successful in making that happen. Doug has also been instrumental in the
> Neutron LBaaS V2 rollout, as well as helping to merge code in the other
> neutron service repositories.
>
> I'd also like to take this time to remind everyone that reviewing code is
> a responsibility, in Neutron the same as other projects. And core reviewers
> are especially beholden to this responsibility. I'd also like to point out
> that +1/-1 reviews are very useful, and I encourage everyone to continue
> reviewing code even if you are not a core reviewer.
>
> Existing neutron cores, please vote +1/-1 for the addition of Doug to the
> core team.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-December/051986.html
> [2] http://russellbryant.net/openstack-stats/neutron-reviewers-90.txt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread Armando M.
If the consensus is to unify all the config options into a single
configuration file, I'd suggest following what the Nova folks did with
[1], which I think is what Salvatore was also hinting at. This will also
help mitigate needless source code conflicts that would inevitably
arise when merging competing changes to the same file.

I personally do not like having a single file with a gazillion options
(the same way I hate source files with a gazillion LOCs, but I digress
;), but I don't like a proliferation of config files either. So I
think what Mark suggested below makes sense.
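
As a rough illustration of why this is mostly an organizational question
rather than a code one: oslo.config, which Neutron already uses, can layer
a single main file with a drop-in directory, so consolidating options does
not have to mean one giant file. A minimal sketch (paths and the option
below are purely illustrative):

    import sys

    from oslo.config import cfg  # the package was later renamed oslo_config

    cfg.CONF.register_opts([
        cfg.StrOpt('core_plugin', help='Neutron core plugin entry point'),
    ])

    # --config-dir parses every *.conf file found in the directory, in
    # sorted order, after the files passed with --config-file, and later
    # values override earlier ones, e.g.:
    #   neutron-server --config-file /etc/neutron/neutron.conf \
    #                  --config-dir /etc/neutron/server.d
    cfg.CONF(sys.argv[1:], project='neutron')
    print(cfg.CONF.core_plugin)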

Cheers,
Armando

[1] - 
https://github.com/openstack/nova/blob/master/etc/nova/README-nova.conf.txt

On 2 May 2014 07:09, Mark McClain  wrote:
>
> On May 2, 2014, at 7:39 AM, Sean Dague  wrote:
>
>> Some non insignificant number of devstack changes related to neutron
>> seem to be neutron plugins having to do all kinds of manipulation of
>> extra config files. The grenade upgrade issue in neutron was because of
>> some placement change on config files. Neutron seems to have *a ton* of
>> config files and is extremely sensitive to their locations/naming, which
>> also seems like it ends up in flux.
>
> We have grown in the number of configuration files and I do think some of the 
> design decisions made several years ago should probably be revisited.  One of 
> the drivers of multiple configuration files is the way that Neutron is 
> currently packaged [1][2].  We’re packaged significantly different than the 
> other projects so the thinking in the early years was that each 
> plugin/service since it was packaged separately needed its own config file.  
> This causes problems because often it involves changing the init script 
> invocation if the plugin is changed vs only changing the contents of the init 
> script.  I’d like to see Neutron changed to be a single package similar to 
> the way Cinder is packaged with the default config being ML2.
>
>>
>> Is there an overview somewhere to explain this design point?
>
> Sadly no.  It’s a historical convention that needs to be reconsidered.
>
>>
>> All the other services have a single config config file designation on
>> startup, but neutron services seem to need a bunch of config files
>> correct on the cli to function (see this process list from recent
>> grenade run - http://paste.openstack.org/show/78430/ note you will have
>> to horiz scroll for some of the neutron services).
>>
>> Mostly it would be good to understand this design point, and if it could
>> be evolved back to the OpenStack norm of a single config file for the
>> services.
>>
>
> +1 to evolving into a more limited set of files.  The trick is how we 
> consolidate the agent, server, plugin and/or driver options or maybe we don’t 
> consolidate and use config-dir more.  In some cases, the files share a set of 
> common options and in other cases there are divergent options [3][4].   
> Outside of testing the agents are not installed on the same system as the 
> server, so we need to ensure that the agent configuration files should stand 
> alone.
>
> To throw something out, what if moved to using config-dir for optional 
> configs since it would still support plugin scoped configuration files.
>
> Neutron Servers/Network Nodes
>   /etc/neutron.d
>     neutron.conf  (Common Options)
>     server.d (all plugin/service config files)
>     service.d (all service config files)
>
>
> Hypervisor Agents
>   /etc/neutron
>     neutron.conf
>     agent.d (Individual agent config files)
>
>
> The invocations would then be static:
>
> neutron-server —config-file /etc/neutron/neutron.conf —config-dir 
> /etc/neutron/server.d
>
> Service Agents:
> neutron-l3-agent —config-file /etc/neutron/neutron.conf —config-dir 
> /etc/neutron/service.d
>
> Hypervisors (assuming the consolidates L2 is finished this cycle):
> neutron-l2-agent —config-file /etc/neutron/neutron.conf —config-dir 
> /etc/neutron/agent.d
>
> Thoughts?
>
> mark
>
> [1] http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/
> [2] 
> http://packages.ubuntu.com/search?keywords=neutron&searchon=names&suite=trusty&section=all
> [3] 
> https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/nuage/nuage_plugin.ini#n2
> [4]https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/bigswitch/restproxy.ini#n3
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Add VMware dvSwitch/vSphere API support for Neutron ML2

2014-05-06 Thread Armando M.
Hi Ilkka,

As Mathieu suggested there is a blueprint submission and revision
process put in place since the Juno release. Also, since Icehouse, to
incorporate a new plugin/mechanism driver into the Neutron source
tree, and to be designated as compatible, such a plugin/driver must be
accompanied by external third party CI testing (more details in [1]).

This means that, once the blueprint work has been approved, the code
must be submitted through the same review process adopted for the
blueprint, as detailed in [2], and accompanied by validation through
third party CI.

This sounds like a lot of work, but it is aimed at ensuring all the
usual -ilities of the software that becomes part of the official source tree.

That said, you are not alone and you can tap into the usual channels,
like the mailing list or IRC ([3]). If there is anything vmware
specific that you would like to address, we are here to help, so feel
free to direct your questions to #openstack-vmware.

Keep up the good work!

Cheers,
Armando

[1] - https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
[2] - https://wiki.openstack.org/wiki/Gerrit_Workflow
[3] - https://wiki.openstack.org/wiki/IRC

On 6 May 2014 01:17, Mathieu Rohon  wrote:
> Hi IIkka,
>
> this is a very interesting MD for ML2. Have you ever tried to use your
> ML2 driver with VMWare drivers on the nova side, so that you could
> manage your VM with nova, and its network with neutron.
> Do you think it would be difficult to extend your driver to support
> vxlan encapsulation?
>
> Neutron has a new process to validate BP. Please follow those
> instructions to submit your spec for review :
> https://wiki.openstack.org/wiki/Blueprints#Neutron
>
> regards
>
> On Mon, May 5, 2014 at 2:22 PM, Ilkka Tengvall
>  wrote:
>> Hi,
>>
>> I would like to start a discussion about a ML2 driver for VMware distributed
>> virtual switch (dvSwitch) for Neutron. There is a new blueprint made by Sami
>> Mäkinen (sjm) in
>> https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch.
>>
>> The driver is described and code is publicly available and hosted in github:
>> https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch
>>
>> We would like to get the driver through the contribution process, whatever
>> that exactly means :)
>>
>> The original problem this driver solves is the following:
>>
>> We've been running a VMware virtualization platform in our data center since
>> before OpenStack, and we will keep doing so due to existing services. We have
>> also been running OpenStack for a while. Now we wanted to get the most out
>> of both by combining the customers' networks on both platforms by using
>> provider networks. The problem is that the networks need two separate
>> managers, neutron and vmware. There were no OpenStack tools to attach the
>> guests on the VMware side to OpenStack provider networks during instance
>> creation.
>>
>> Now we are putting our VMware under the control of OpenStack. We want to have
>> one master to control the networks: Neutron. We implemented the new ML2
>> driver to do just that. It is capable of joining the machines created in
>> vSphere to the same provider networks OpenStack uses, using dvSwitch
>> port groups.
>>
>>
>> I just wanted to open the discussion, for the technical details please
>> contact our experts on the CC list:
>>
>> Sami J. Mäkinen
>> Jussi Sorjonen (freenode: mieleton)
>>
>>
>> BR,
>>
>> Ilkka Tengvall
>>  Advisory Consultant, Cloud Architecture
>>  email:  ilkka.tengv...@cybercom.com
>>  mobile: +358408443462
>>  freenode: ikke-t
>>  web:http://cybercom.com - http://cybercom.fi
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposed changes to core team

2014-05-21 Thread Armando M.
+1 from me too: Carl's contributions, code and reviews, have helped raise
the quality of this project.

Cheers,
Armando

On 21 May 2014 15:05, Maru Newby  wrote:
>
> On May 21, 2014, at 1:59 PM, Kyle Mestery  wrote:
>
>> Neutron cores, please vote +1/-1 for the proposed addition of Carl
>> Baldwin to Neutron core.
>
> +1 from me
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Armando M.
I would second Maru's concerns, and I would also like to add the following:

We need to acknowledge the fact that there are certain architectural
aspects of Neutron as a project that need to be addressed; at the
summit we talked about the core refactoring, a task-oriented API, etc.
To me these items have been neglected far too much in the past and
need a higher priority and a lot more attention during the Juno
cycle. Being stretched as we are, I wonder if dev/review cycles
wouldn't be better spent devoting more time to these efforts rather
than GP.

That said, I appreciate that GP is important and needs to move
forward, but at the same time I am thinking that there must be a
better way of addressing it that also relieves some of the pressure that
GP complexity imposes on the Neutron team. One aspect that was discussed
at the summit was that the type of approach shown in [2] and [3]
below was chosen because of a lack of proper integration hooks... so I
am advocating: let's talk about those first before ruling them out in
favor of a monolithic approach that seems to violate some engineering
principles, like modularity and loose coupling of system components.

I think we didn't have enough time during the summit to iron out some
of the concerns voiced here, and it seems like the IRC meeting for
Group Policy would not be the right venue to try and establish a
common ground among the people driving this effort and the rest of the
core team.

Shall we try and have an ad-hoc meeting and an ad-hoc agenda to find a
consensus?

Many thanks,
Armando

On 22 May 2014 11:38, Maru Newby  wrote:
>
> On May 22, 2014, at 11:03 AM, Maru Newby  wrote:
>
>> At the summit session last week for group-based policy, there were many 
>> concerns voiced about the approach being undertaken.  I think those concerns 
>> deserve a wider audience, and I'm going to highlight some of them here.
>>
>> The primary concern seemed to be related to the complexity of the approach 
>> implemented for the POC.  A number of session participants voiced concern 
>> that the simpler approach documented in the original proposal [1] (described 
>> in the section titled 'Policies applied between groups') had not been 
>> implemented in addition to or instead of what appeared in the POC (described 
>> in the section titled 'Policies applied as a group API').  The simpler 
>> approach was considered by those participants as having the advantage of 
>> clarity and immediate usefulness, whereas the complex approach was deemed 
>> hard to understand and without immediate utility.
>>
>> A secondary but no less important concern is related to the impact on 
>> Neutron of the approach implemented in the POC.  The POC was developed 
>> monolithically, without oversight through gerrit, and the resulting patches 
>> were excessive in size (~4700 [2] and ~1500 [3] lines).  Such large patches 
>> are effectively impossible to review.  Even broken down into reviewable 
>> chunks, though, it does not seem realistic to target juno-1 for merging this 
>> kind of complexity.  The impact on stability could be considerable, and it 
>> is questionable whether the necessary review effort should be devoted to 
>> fast-tracking group-based policy at all, let alone an approach that is 
>> considered by many to be unnecessarily complicated.
>>
>> The blueprint for group policy [4] is currently listed as a 'High' priority. 
>>  With the above concerns in mind, does it make sense to continue 
>> prioritizing an effort that at present would seem to require considerably 
>> more resources than the benefit it appears to promise?
>>
>>
>> Maru
>>
>> 1: https://etherpad.openstack.org/p/group-based-policy
>
> Apologies, this link is to the summit session etherpad.  The link to the 
> original proposal is:
>
> https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit
>
>> 2: https://review.openstack.org/93853
>> 3: https://review.openstack.org/93935
>> 4: 
>> https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Armando M.
 on that review and
> evaluation all the proposals in the document that you refer. It is easy to
> make general comments, but unless you participate in the process and sign up
> to writing the code, those comments are not going to help with solving the
> original problem. And this _is_ open-source. If you disagree, please write
> code and the community can decide for itself as to what model is actually
> simple to use for them. Curtailing efforts from other developers just
> because their engineering trade-offs are different from what you believe
> your use-case needs is not why we like open source. We enjoy the mode where
> different developers try different things, we experiment, and the software
> evolves to what the user demands. Or maybe, multiple models live in harmony.
> Let the users decide that.
>
> 3. Re: Could dev/review cycles be better spent on refactoring
> I think that most people agree that policy control is an important feature
> that fundamentally improves neutron (by solving the automation and scale
> issues). In a large project, multiple sub-projects can, and for a healthy
> project should, work in parallel. I understand that the neutron core team is
> stretched. But we still need to be able to balance the needs of today
> (paying off the technical debt/existing-issues by doing refactoring) with
> needs of tomorrow (new features like GP and LBaaS). GP effort was started in
> Havana, and now we are trying to get this in Juno. I think that is
> reasonable and a long enough cycle for a "high priority" project to be able
> to get some core attention. Again I refer to LBaaS experience, as they
> struggled with very similar issues.
>
> 4. Re: If refactored neutron was available, would a simpler option become
> more viable
> We would love to be able to answer that question. We have been trying to
> understand the refactoring work to understand this (see another ML thread)
> and we are open to understanding your position on that. We will call the
> ad-hoc meeting that you suggested and we would like to understand the
> refactoring work that might be reused for simpler policy implementation. At
> the same time, we would like to build on what is available today, and when
> the required refactored neutron becomes available (say Juno or K-release),
> we are more than happy to adapt to it at that time. Serializing all
> development around an effort that is still in inception phase is not a good
> solution. We are looking forward to participating in the core refactoring
> work, and based on the final spec that come up with, we would love to be
> able to eventually make the policy implementation simpler.
>
> Regards,
> Mandeep
>
>
>
>
> On Thu, May 22, 2014 at 11:44 AM, Armando M.  wrote:
>>
>> I would second Maru's concerns, and I would also like to add the
>> following:
>>
>> We need to acknowledge the fact that there are certain architectural
>> aspects of Neutron as a project that need to be addressed; at the
>> summit we talked about the core refactoring, a task oriented API, etc.
>> To me these items have been neglected far too much over the past and
>> would need a higher priority and a lot more attention during the Juno
>> cycle. Being stretched as we are I wonder if dev/review cycles
>> wouldn't be better spent devoting more time to these efforts rather
>> than GP.
>>
>> That said, I appreciate that GP is important and needs to move
>> forward, but at the same time I think there must be a better way of
>> addressing it that relieves some of the pressure that GP complexity
>> imposes on the Neutron team. One aspect discussed at the summit was
>> that the type of approach shown in [2] and [3] below was chosen
>> because of a lack of proper integration hooks...so I am advocating:
>> let's talk about those first before ruling them out in favor of a
>> monolithic approach that seems to violate some engineering
>> principles, like modularity and loose coupling of system components.
>>
>> I think we didn't have enough time during the summit to iron out some
>> of the concerns voiced here, and it seems like the IRC meeting for
>> Group Policy would not be the right venue to try and establish a
>> common ground among the people driving this effort and the rest of the
>> core team.
>>
>> Shall we try and have an ad-hoc meeting and an ad-hoc agenda to find a
>> consensus?
>>
>> Many thanks,
>> Armando
>>
>> On 22 May 2014 11:38, Maru Newby  wrote:
>> >
>> > On May 22, 2014, at 11:03 AM, Maru Newby  wrote:
>> >
>> >> At the summit session last week for group-based

Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-23 Thread Armando M.
On 23 May 2014 12:31, Robert Kukura  wrote:
>
> On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
>
> Hi Armando:
>
> Those are good points. I will let Bob Kukura chime in on the specifics of
> how we intend to do that integration. But if what you see in the
> prototype/PoC was our final design for integration with Neutron core, I
> would be worried about that too. That specific part of the code
> (events/notifications for DHCP) was done in that way just for the prototype
> - to allow us to experiment with the part that was new and needed
> experimentation, the APIs and the model.
>
> That is the exact reason that we did not initially check the code to gerrit
> - so that we do not confuse the review process with the prototype process.
> But we were requested by other cores to check in even the prototype code as
> WIP patches to allow for review of the API parts. That can unfortunately
> create this very misunderstanding. For the review, I would recommend not the
> WIP patches, as they contain the prototype parts as well, but just the final
> patches that are not marked WIP. If you see such issues in that part of the
> code, please DO raise that, as that would be code that we intend to upstream.
>
> I believe Bob did discuss the specifics of this integration issue with you
> at the summit, but like I said it is best if he represents that side
> himself.
>
> Armando and Mandeep,
>
> Right, we do need a workable solution for the GBP driver to invoke neutron
> API operations, and this came up at the summit.
>
> We started out in the PoC directly calling the plugin, as is currently done
> when creating ports for agents. But this is not sufficient because the DHCP
> notifications, and I think the nova notifications, are needed for VM ports.
> We also really should be generating the other notifications, enforcing
> quotas, etc. for the neutron resources.

I am at a loss here: if you say that you couldn't fit this at the plugin
level, that is because it is the wrong level!! Sitting above it and
redoing all the glue code around it to add DHCP notifications etc.
continues the bad practice within the Neutron codebase where there is
not a good separation of concerns: for instance, everything is cobbled
together, like the DB and plugin logic. I appreciate that some design
decisions have been made in the past, but there's no good reason for a
nice new feature like GP to continue this bad practice; this is why I
feel strongly about the current approach being taken.

>
> We could just use python-neutronclient, but I think we'd prefer to avoid the
> overhead. The neutron project already depends on python-neutronclient for
> some tests, the debug facility, and the metaplugin, so in retrospect, we
> could have easily used it in the PoC.

I am not sure I understand what overhead you mean here. Could you
clarify? Actually, looking at the code, I see a mind-boggling set of
interactions going back and forth between the GP plugin, the policy
driver manager, the mapping driver and the core plugin: they are all
entangled together. For instance, when creating an endpoint, the GP
plugin ends up calling the mapping driver, which in turn ends up calling
the GP plugin itself! If this is not overhead I don't know what is!
The way the code has been structured makes it very difficult to read,
let alone maintain and extend with other policy mappers. The ML2-like
nature of the approach taken might work well in the context of core
plugins, mechanism drivers, etc., but I would argue that it applies
poorly to the context of GP.

>
> With the existing REST code, if we could find the
> neutron.api.v2.base.Controller class instance for each resource, we could
> simply call create(), update(), delete(), and show() on these. I didn't see
> an easy way to find these Controller instances, so I threw together some
> code similar to these Controller methods for the PoC. It probably wouldn't
> take too much work to have neutron.manager.NeutronManager provide access to
> the Controller classes if we want to go this route.
>
> The core refactoring effort may eventually provide a nice solution, but we
> can't wait for this. It seems we'll need to either use python-neutronclient
> or get access to the Controller classes in the meantime.
>
> Any thoughts on these? Any other ideas?

I am still not sure why you even need to go all the way down to the
Controller class. After all, it's almost like GP could be a service in
its own right that makes use of Neutron to map the application-centric
abstractions on top of the networking constructs; this can happen via
the REST interface. I don't think there is a dependency on the core
refactoring here: the two can progress separately, so long as we break
the tie, from an implementation perspective, that GP and core plugins
need to live in the same address space. Am I missing something?
Because I still cannot justify why things have been coded the way they
have.
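
To make the kind of decoupling I have in mind a bit more concrete, here is a
very rough sketch (credentials, names and the mapping logic are all made up
for illustration) of a GP service living in its own process and driving
Neutron purely over REST via python-neutronclient:

    # Rough sketch only: a GP "mapping" service outside the Neutron server
    # process, consuming Neutron exclusively through its public REST API.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='gp-service',
                                    password='secret',
                                    tenant_name='service',
                                    auth_url='http://keystone:5000/v2.0')

    def map_group(name, cidr):
        """Map a hypothetical GP group onto core Neutron constructs."""
        net = neutron.create_network({'network': {'name': name}})['network']
        neutron.create_subnet({'subnet': {'network_id': net['id'],
                                          'ip_version': 4,
                                          'cidr': cidr}})
        return net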

Thanks,
Armando

>
> Thanks,
>
> -Bob
>
>
> Regards,
> Mandeep
>
>
>
>
> ___

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-24 Thread Armando M.
On 24 May 2014 05:20, Robert Kukura  wrote:
>
> On 5/23/14, 10:54 PM, Armando M. wrote:
>>
>> On 23 May 2014 12:31, Robert Kukura  wrote:
>>>
>>> On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
>>>
>>> Hi Armando:
>>>
>>> Those are good points. I will let Bob Kukura chime in on the specifics of
>>> how we intend to do that integration. But if what you see in the
>>> prototype/PoC was our final design for integration with Neutron core, I
>>> would be worried about that too. That specific part of the code
>>> (events/notifications for DHCP) was done in that way just for the
>>> prototype
>>> - to allow us to experiment with the part that was new and needed
>>> experimentation, the APIs and the model.
>>>
>>> That is the exact reason that we did not initially check the code to
>>> gerrit
>>> - so that we do not confuse the review process with the prototype
>>> process.
>>> But we were requested by other cores to check in even the prototype code
>>> as
>>> WIP patches to allow for review of the API parts. That can unfortunately
>>> create this very misunderstanding. For the review, I would recommend not
>>> the
>>> WIP patches, as they contain the prototype parts as well, but just the
>>> final
>>> patches that are not marked WIP. If you see such issues in that part of the
>>> code, please DO raise that, as that would be code that we intend to
>>> upstream.
>>>
>>> I believe Bob did discuss the specifics of this integration issue with
>>> you
>>> at the summit, but like I said it is best if he represents that side
>>> himself.
>>>
>>> Armando and Mandeep,
>>>
>>> Right, we do need a workable solution for the GBP driver to invoke
>>> neutron
>>> API operations, and this came up at the summit.
>>>
>>> We started out in the PoC directly calling the plugin, as is currently
>>> done
>>> when creating ports for agents. But this is not sufficient because the
>>> DHCP
>>> notifications, and I think the nova notifications, are needed for VM
>>> ports.
>>> We also really should be generating the other notifications, enforcing
>>> quotas, etc. for the neutron resources.
>>
>> I am at a loss here: if you say that you couldn't fit this at the plugin
>> level, that is because it is the wrong level!! Sitting above it and
>> redoing all the glue code around it to add DHCP notifications etc.
>> continues the bad practice within the Neutron codebase where there is
>> not a good separation of concerns: for instance, everything is cobbled
>> together, like the DB and plugin logic. I appreciate that some design
>> decisions have been made in the past, but there's no good reason for a
>> nice new feature like GP to continue this bad practice; this is why I
>> feel strongly about the current approach being taken.
>
> Armando, I am agreeing with you! The code you saw was a proof-of-concept
> implementation intended as a learning exercise, not something intended to be
> merged as-is to the neutron code base. The approach for invoking resources
> from the driver(s) will be revisited before the driver code is submitted for
> review.
>>
>>
>>> We could just use python-neutronclient, but I think we'd prefer to avoid
>>> the
>>> overhead. The neutron project already depends on python-neutronclient for
>>> some tests, the debug facility, and the metaplugin, so in retrospect, we
>>> could have easily used it in the PoC.
>>
>> I am not sure I understand what overhead you mean here. Could you
>> clarify? Actually, looking at the code, I see a mind-boggling set of
>> interactions going back and forth between the GP plugin, the policy
>> driver manager, the mapping driver and the core plugin: they are all
>> entangled together. For instance, when creating an endpoint, the GP
>> plugin ends up calling the mapping driver, which in turn ends up calling
>> the GP plugin itself! If this is not overhead I don't know what is!
>> The way the code has been structured makes it very difficult to read,
>> let alone maintain and extend with other policy mappers. The ML2-like
>> nature of the approach taken might work well in the context of core
>> plugins, mechanism drivers, etc., but I would argue that it applies
>> poorly to the context of GP.
>
> The overhead of using python-neutronclient is that unnecessary
> serialization/deserialization are performed as well as socket communication
> through the ke

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-26 Thread Armando M.
On May 26, 2014 4:27 PM, "Mohammad Banikazemi"  wrote:
>
> Armando,
>
> I think there are a couple of things that are being mixed up here, at
least as I see this conversation :). The mapping driver is simply one way
of implementing GP. Ideally I would say, you do not need to implement the
GP in terms of other Neutron abstractions even though you may choose to do
so. A network controller could realize the connectivities and policies
defined by GP independent of say networks, and subnets. If we agree on this
point, then how we organize the code will be different than the case where
GP is always defined as something on top of current neutron API. In other
words, we shouldn't organize the overall code for GP based solely on the
use of the mapping driver.

The mapping driver is embedded in the policy framework that Bob had
initially proposed. If I understood what you're suggesting correctly, it
makes very little sense to diverge or come up with a different framework
alongside the legacy driver later on; otherwise we may end up in the same
state as the core plugins: monolithic vs ML2-based. Could you clarify?
>
> In the mapping driver (aka the legacy driver) for the PoC, GP is
implemented in terms of other Neutron abstractions. I agree that using
python-neutronclient for the PoC would be fine and as Bob has mentioned it
would have been probably the best/easiest way of having the PoC implemented
in the first place. The calls to python-neutronclient in my understanding
could eventually be easily replaced with direct calls after refactoring,
which leads me to ask a question concerning the following part of the
conversation (being copied here again):

Not sure why we keep bringing this refactoring up: my point is that if GP
were to be implemented the way I'm suggesting, the refactoring would have no
impact on GP...even if it did, replacing remote calls with direct calls should
be avoided IMO.

>
>
> [Bob:]
>
> > > The overhead of using python-neutronclient is that unnecessary
> > > serialization/deserialization are performed as well as socket
communication
> > > through the kernel. This is all required between processes, but not
within a
> > > single process. A well-defined and efficient mechanism to invoke
resource
> > > APIs within the process, with the same semantics as incoming REST
calls,
> > > seems like a generally useful addition to neutron. I'm hopeful the
core
> > > refactoring effort will provide this (and am willing to help make
sure it
> > > does), but we need something we can use until that is available.
> > >
>
> [Armando:]
>
> > I appreciate that there is a cost involved in relying on distributed
> > communication, but this must be negligible considered what needs to
> > happen end-to-end. If the overhead being referred here is the price to
> > pay for having a more dependable system (e.g. because things can be
> > scaled out and/or made reliable independently), then I think this is a
> > price worth paying.
> >
> > I do hope that the core refactoring is not aiming at what you're
> > suggesting, as it sounds in exact opposition to some of the OpenStack
> > design principles.
>
>
> From the summit sessions (in particular the session by Mark on
refactoring the core), I too was under the impression that there will be a
way of invoking Neutron API within the plugin with the same semantics as
through the REST API. Is this a misunderstanding?

That was not my understanding, but I'll let Mark chime in on this.

Many thanks
Armando
>
> Best,
>
> Mohammad
>
>
>
>
>
>
>
> "Armando M."  wrote on 05/24/2014 01:36:35 PM:
>
> > From: "Armando M." 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > ,
> > Date: 05/24/2014 01:38 PM
> > Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
driver
>
> >
> > On 24 May 2014 05:20, Robert Kukura  wrote:
> > >
> > > On 5/23/14, 10:54 PM, Armando M. wrote:
> > >>
> > >> On 23 May 2014 12:31, Robert Kukura  wrote:
> > >>>
> > >>> On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
> > >>>
> > >>> Hi Armando:
> > >>>
> > >>> Those are good points. I will let Bob Kukura chime in on the
specifics of
> > >>> how we intend to do that integration. But if what you see in the
> > >>> prototype/PoC was our final design for integration with Neutron
core, I
> > >>> would be worried about that too. That specific part of the code
> > >>> (events/notifications for DHCP) was done in that way just for the
> > >>> prototyp

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-27 Thread Armando M.
Hi Mohammad,

Thanks, I understand now. I appreciate that the mapping driver is one way
of doing things and that the design has been socialized for a while. I
wish I could follow infinite channels, but unfortunately the OpenStack
information overload is astounding and sometimes I fail :) Gerrit is the
channel I strive to follow, and this is when I saw the code for the first
time, hence my feedback.

It's worth noting that the PoC design document is (as it should be) very
high level and most of my feedback applies to the implementation decisions
being made. That said, I still have doubts that an ML2-like approach is
really necessary for GP, and I welcome input to help me change my mind :)

Thanks
Armando
On May 27, 2014 5:04 PM, "Mohammad Banikazemi"  wrote:

> Thanks for the continued interest in discussing Group Policy (GP). I
> believe these discussions with the larger Neutron community can benefit the
> GP work.
>
> GP like any other Neutron extension can have different implementations.
> Our idea has been to have the GP code organized similar to how ML2 and
> mechanism drivers are organized, with the possibility of having different
> drivers for realizing the GP API. One such driver (analogous to an ML2
> mechanism driver I would say) is the mapping driver that was implemented
> for the PoC. I certainly do not see it as the only implementation. The
> mapping driver is just the driver we used for our PoC implementation in
> order to gain experience in developing such a driver. Hope this clarifies
> things a bit.
>
> Please note that for better or worse we have produced several documents
> during the previous cycle. We have tried to collect them on the GP wiki
> page [1]. The latest design document [2] should give a broad view of the GP
> extension and the model being proposed. The PoC document [3] may clarify
> our PoC plans and where the mapping driver stands wrt other pieces of the
> work.  (Please note some parts of the plan as described in the PoC document
> was not implemented.)
>
> Hope my explanation and these documents (and other documents available on
> the GP wiki) are helpful.
>
> Best,
>
> Mohammad
>
> [1] https://wiki.openstack.org/wiki/Neutron/GroupPolicy   <- GP wiki
> page
> [2]
> https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/
><- GP design document
> [3]
> https://docs.google.com/document/d/14UyvBkptmrxB9FsWEP8PEGv9kLqTQbsmlRxnqeF9Be8/
><- GP PoC document
>
>
>
> From: "Armando M." 
> To: "OpenStack Development Mailing List, (not for usage questions)" <
> openstack-dev@lists.openstack.org>,
> Date: 05/26/2014 09:46 PM
> Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
> driver
> --
>
>
>
>
> On May 26, 2014 4:27 PM, "Mohammad Banikazemi" 
> <*m...@us.ibm.com*>
> wrote:
> >
> > Armando,
> >
> > I think there are a couple of things that are being mixed up here, at
> least as I see this conversation :). The mapping driver is simply one way
> of implementing GP. Ideally I would say, you do not need to implement the
> GP in terms of other Neutron abstractions even though you may choose to do
> so. A network controller could realize the connectivities and policies
> defined by GP independent of say networks, and subnets. If we agree on this
> point, then how we organize the code will be different than the case where
> GP is always defined as something on top of current neutron API. In other
> words, we shouldn't organize the overall code for GP based solely on the
> use of the mapping driver.
>
> The mapping driver is embedded in the policy framework that Bob had
> initially proposed. If I understood what you're suggesting correctly, it
> makes very little sense to diverge or come up with a different framework
> alongside the legacy driver later on; otherwise we may end up in the same
> state as the core plugins: monolithic vs ML2-based. Could you clarify?
> >
> > In the mapping driver (aka the legacy driver) for the PoC, GP is
> implemented in terms of other Neutron abstractions. I agree that using
> python-neutronclient for the PoC would be fine and as Bob has mentioned it
> would have been probably the best/easiest way of having the PoC implemented
> in the first place. The calls to python-neutronclient in my understanding
> could be eventually easily replaced with direct calls after refact

Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-28 Thread Armando M.
Hi Keshava,

To the best of my knowledge Nova does not have an explicit way to determine
VM placements based on network attributes. That said, it does have a
general mechanism called host-aggregates [1] that can be leveraged to
address what you are looking for. How certain hosts are grouped together to
match certain network affinity rules is in the hands of the cloud operator
and I believe this requires quite a bit of out-of-band management.
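
As a very rough illustration only (host names, aggregate metadata, flavor
values and the exact novaclient calls/scheduler filter are assumptions on my
part and should be double-checked against your release), an operator could
pin a flavor to the compute nodes behind a given TOR along these lines:

    # Rough sketch, not a recommendation: group the compute nodes behind TOR-1
    # into a host aggregate and steer instances there via flavor extra specs.
    # All names/values are invented; verify the calls against your novaclient.
    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://keystone:5000/v2.0')

    agg = nova.aggregates.create('tor-1', None)          # no availability zone
    for host in ('cn-01', 'cn-02', 'cn-03'):              # hypothetical hypervisors
        nova.aggregates.add_host(agg.id, host)
    nova.aggregates.set_metadata(agg.id, {'tor': '1'})

    # With AggregateInstanceExtraSpecsFilter enabled in nova-scheduler, a flavor
    # carrying the matching extra spec only lands on hosts in that aggregate
    # (newer releases may want the 'aggregate_instance_extra_specs:' key scope).
    flavor = nova.flavors.create('m1.tor1', 2048, 1, 20)
    flavor.set_keys({'tor': '1'})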

I recall at one point that there was an effort going on to improve the
usability of such a use case (using the port binding extension [2]), but my
knowledge is not very current, so I'd need to fall back on some other folks
listening on the ML to chime in on the latter topic.

Hope this helps!
Armando

[1] -
http://docs.openstack.org/trunk/openstack-ops/content/scaling.html#segregate_cloud
[2] -
http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html


On 27 May 2014 09:53, A, Keshava  wrote:

>  Hi,
>
> I have one of the basic question about the Nova Scheduler in the following
> below scenario.
>
> Whenever a new VM is to be hosted, is there any consideration of network
> attributes?
>
> For example, let us say all the VMs with 10.1.x are under TOR-1, and 20.1.xy are
> under TOR-2.
>
> A new CN node is inserted under TOR-2 and at the same time a new tenant VM
> needs to be hosted on the 10.1.xa network.
>
>
>
> Then is it possible to mandate the new VM (10.1.xa) to be hosted under TOR-1
> instead of it getting scheduled under TOR-2 (where CN-23 is completely
> free from a resource perspective)?
>
> This is required to achieve prefix/route aggregation and to avoid network
> broadcast (in case they are scattered across different TORs/switches)?
>
>
>
>
>
>
>
>
>
> Thanks & regards,
>
> Keshava.A
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Armando M.
Hi Paul,

Just out of curiosity, I am assuming you are using the client that
still relies on httplib2. Patch [1] replaced httplib2 with requests,
but I believe that a new client that incorporates this change has not
yet been published. I wonder if the failures you are referring to
manifest themselves with the former http library rather than the
latter. Could you clarify?

Thanks,
Armando

[1] - https://review.openstack.org/#/c/89879/

On 29 May 2014 17:25, Paul Ward  wrote:
> Well, for my specific error, it was an intermittent ssl handshake error
> before the request was ever sent to the
> neutron-server.  In our case, we saw that 4 out of 5 resize operations
> worked, the fifth failed with this ssl
> handshake error in neutronclient.
>
> I certainly think a GET is safe to retry, and I agree with your statement
> that PUTs and DELETEs probably
> are as well.  This still leaves a change in nova needing to be made to
> actually a) specify a conf option and
> b) pass it to neutronclient where appropriate.
>
>
> Aaron Rosen  wrote on 05/28/2014 07:38:56 PM:
>
>> From: Aaron Rosen 
>
>
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> ,
>> Date: 05/28/2014 07:44 PM
>
>> Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient
>>
>> Hi,
>>
>> I'm curious if other openstack clients implement this type of retry
>> thing. I think retrying on GET/DELETES/PUT's should probably be okay.
>>
>> What types of errors do you see in the neutron-server when it fails
>> to respond? I think it would be better to move the retry logic into
>> the server around the failures rather than the client (or better yet
>> if we fixed the server :)). Most of the times I've seen this type of
>> failure is due to deadlock errors caused between (sqlalchemy and
>> eventlet *i think*) which cause the client to eventually timeout.
>>
>> Best,
>>
>> Aaron
>>
>
>> On Wed, May 28, 2014 at 11:51 AM, Paul Ward  wrote:
>> Would it be feasible to make the retry logic only apply to read-only
>> operations?  This would still require a nova change to specify the
>> number of retries, but it'd also prevent invokers from shooting
>> themselves in the foot if they call for a write operation.
>>
>>
>>
>> Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:
>>
>> > From: Aaron Rosen 
>>
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > ,
>> > Date: 05/27/2014 09:44 PM
>>
>> > Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> > neutronclient
>> >
>> > Hi,
>>
>> >
>> > Is it possible to detect when the ssl handshaking error occurs on
>> > the client side (and only retry for that)? If so I think we should
>> > do that rather than retrying multiple times. The danger here is
>> > mostly for POST operations (as Eugene pointed out) where it's
>> > possible for the response to not make it back to the client and for
>> > the operation to actually succeed.
>> >
>> > Having this retry logic nested in the client also prevents things
>> > like nova from handling these types of failures individually since
>> > this retry logic is happening inside of the client. I think it would
>> > be better not to have this internal mechanism in the client and
>> > instead make the user of the client implement retry so they are
>> > aware of failures.
>> >
>> > Aaron
>> >
>>
>> > On Tue, May 27, 2014 at 10:48 AM, Paul Ward  wrote:
>> > Currently, neutronclient is hardcoded to only try a request once in
>> > retry_request by virtue of the fact that it uses self.retries as the
>> > retry count, and that's initialized to 0 and never changed.  We've
>> > seen an issue where we get an ssl handshaking error intermittently
>> > (seems like more of an ssl bug) and a retry would probably have
>> > worked.  Yet, since neutronclient only tries once and gives up, it
>> > fails the entire operation.  Here is the code in question:
>> >
>> > https://github.com/openstack/python-neutronclient/blob/master/
>> > neutronclient/v2_0/client.py#L1296
>> >
>> > Does anybody know if there's some explicit reason we don't currently
>> > allow configuring the number of retries?  If not, I'm inclined to
>> > propose a change for just that.
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___

Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Armando M.
Mishandling of SSL was the very reason why I brought that change
forward, so I wouldn't rule it out completely ;)

A.

On 29 May 2014 19:15, Paul Ward  wrote:
> Yes, we're still on a code level that uses httplib2.  I noticed that as
> well, but wasn't sure if that would really
> help here as it seems like an ssl thing itself.  But... who knows??  I'm not
> sure how consistently we can
> recreate this, but if we can, I'll try using that patch to use requests and
> see if that helps.
>
>
>
> "Armando M."  wrote on 05/29/2014 11:52:34 AM:
>
>> From: "Armando M." 
>
>
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> ,
>> Date: 05/29/2014 11:58 AM
>
>> Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient
>>
>> Hi Paul,
>>
>> Just out of curiosity, I am assuming you are using the client that
>> still relies on httplib2. Patch [1] replaced httplib2 with requests,
>> but I believe that a new client that incorporates this change has not
>> yet been published. I wonder if the failures you are referring to
>> manifest themselves with the former http library rather than the
>> latter. Could you clarify?
>>
>> Thanks,
>> Armando
>>
>> [1] - https://review.openstack.org/#/c/89879/
>>
>> On 29 May 2014 17:25, Paul Ward  wrote:
>> > Well, for my specific error, it was an intermittent ssl handshake error
>> > before the request was ever sent to the
>> > neutron-server.  In our case, we saw that 4 out of 5 resize operations
>> > worked, the fifth failed with this ssl
>> > handshake error in neutronclient.
>> >
>> > I certainly think a GET is safe to retry, and I agree with your
>> > statement
>> > that PUTs and DELETEs probably
>> > are as well.  This still leaves a change in nova needing to be made to
>> > actually a) specify a conf option and
>> > b) pass it to neutronclient where appropriate.
>> >
>> >
>> > Aaron Rosen  wrote on 05/28/2014 07:38:56 PM:
>> >
>> >> From: Aaron Rosen 
>> >
>> >
>> >> To: "OpenStack Development Mailing List (not for usage questions)"
>> >> ,
>> >> Date: 05/28/2014 07:44 PM
>> >
>> >> Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> >> neutronclient
>> >>
>> >> Hi,
>> >>
>> >> I'm curious if other openstack clients implement this type of retry
>> >> thing. I think retrying on GET/DELETES/PUT's should probably be okay.
>> >>
>> >> What types of errors do you see in the neutron-server when it fails
>> >> to respond? I think it would be better to move the retry logic into
>> >> the server around the failures rather than the client (or better yet
>> >> if we fixed the server :)). Most of the times I've seen this type of
>> >> failure is due to deadlock errors caused between (sqlalchemy and
>> >> eventlet *i think*) which cause the client to eventually timeout.
>> >>
>> >> Best,
>> >>
>> >> Aaron
>> >>
>> >
>> >> On Wed, May 28, 2014 at 11:51 AM, Paul Ward  wrote:
>> >> Would it be feasible to make the retry logic only apply to read-only
>> >> operations?  This would still require a nova change to specify the
>> >> number of retries, but it'd also prevent invokers from shooting
>> >> themselves in the foot if they call for a write operation.
>> >>
>> >>
>> >>
>> >> Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:
>> >>
>> >> > From: Aaron Rosen 
>> >>
>> >> > To: "OpenStack Development Mailing List (not for usage questions)"
>> >> > ,
>> >> > Date: 05/27/2014 09:44 PM
>> >>
>> >> > Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> >> > neutronclient
>> >> >
>> >> > Hi,
>> >>
>> >> >
>> >> > Is it possible to detect when the ssl handshaking error occurs on
>> >> > the client side (and only retry for that)? If so I think we should
>> >> > do that rather than retrying multiple times. The danger here is
>> >> > mostly for POST operations (as Eugene pointed out) where it's
>> >> > possible for the response to not make it back to the client and for
>> >

Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-06-16 Thread Armando M.
I believe Brocade's mech driver might have the same problem.

That said, if the content of the rpm that installs the BigSwitch plugin is
just the sub-tree for bigswitch (plus the config files, perhaps), you might
get away with this issue by just installing the bigswitch-plugin package. I
assume you tried that and it didn't work?

I was unable to find the rpm specs for CentOS to confirm.

A.


On 17 June 2014 00:02, Kevin Benton  wrote:

> Hello,
>
> In the Big Switch ML2 driver, we rely on quite a bit of code from the Big
> Switch plugin. This works fine for distributions that include the entire
> neutron code base. However, some break apart the neutron code base into
> separate packages. For example, in CentOS I can't use the Big Switch ML2
> driver with just ML2 installed because the Big Switch plugin directory is
> gone.
>
> Is there somewhere where we can put common third party code that will be
> safe from removal during packaging?
>
>
> Thanks
> --
> Kevin Benton
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-06-17 Thread Armando M.
I don't think that a common area, as proposed, is a silver bullet for
solving packaging issues such as this one. Knowing that the right source
tree bits are dropped onto the file system is not enough to guarantee that
the end-to-end solution will work on a specific distro. Other issues may
arise after configuration and execution.

IMO, this is a bug in the package spec, and should be taken care of during
the packaging implementation, testing and validation.

That said, I think the right approach is to provide a 'python-neutron'
package that installs the entire source tree; the specific plugin package
can then take care of the specifics, like config files.

Armando


On 17 June 2014 06:43, Shiv Haris  wrote:

> Right Armando.
>
> Brocade’s mech driver problem is due to NETCONF templates - would also
> prefer to see a common area for such templates – not just common code.
>
> Sort of like:
>
> common/brocade/templates
> common/bigswitch/*
>
> -Shiv
> From: "Armando M." 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] - Location for common third-party
> libs?
>
> I believe Brocade's mech driver might have the same problem.
>
> That said, if the content of the rpm that installs the BigSwitch plugin is
> just the sub-tree for bigswitch (plus the config files, perhaps), you might
> get away with this issue by just installing the bigswitch-plugin package. I
> assume you tried that and it didn't work?
>
> I was unable to find the rpm specs for CentOS to confirm.
>
> A.
>
>
> On 17 June 2014 00:02, Kevin Benton  wrote:
>
>> Hello,
>>
>> In the Big Switch ML2 driver, we rely on quite a bit of code from the Big
>> Switch plugin. This works fine for distributions that include the entire
>> neutron code base. However, some break apart the neutron code base into
>> separate packages. For example, in CentOS I can't use the Big Switch ML2
>> driver with just ML2 installed because the Big Switch plugin directory is
>> gone.
>>
>> Is there somewhere where we can put common third party code that will be
>> safe from removal during packaging?
>>
>>
>> Thanks
>> --
>> Kevin Benton
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A modest proposal to reduce reviewer load

2014-06-17 Thread Armando M.
I wonder what the turnaround time of trivial patches actually is; I bet it's
very small, and, as Daniel said, the human burden is rather minimal (I
would be more concerned about slowing them down in the gate, but I digress).

I think that introducing a two-tier approval scheme can only
mitigate the problem, but I wonder if we'd need to go a lot further and
rather figure out a way to borrow concepts from queueing theory so that
they can be applied in the context of Gerrit. For instance, Little's law [1]
says:

"The long-term average number of customers (in this context *reviews*) in a
stable system L is equal to the long-term average effective arrival rate,
λ, multiplied by the average time a customer spends in the system, W; or
expressed algebraically: L = λW."

L can be used to determine the number of core reviewers that a project will
need at any given time, in order to meet a certain arrival rate and average
time spent in the queue. If the number of core reviewers is a lot less than
L then that core team is understaffed and will need to increase.
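
Just to make the arithmetic concrete, a toy back-of-the-envelope sketch
(every number below is invented for illustration, not a measurement of the
actual Gerrit queue):

    # Toy application of Little's law (L = lambda * W) to review load.
    arrival_rate = 120.0   # lambda: patch revisions arriving per week (assumed)
    avg_wait = 1.5         # W: average weeks a patch waits for review (assumed)

    in_flight = arrival_rate * avg_wait   # L: average reviews sitting in the queue

    reviews_per_core = 25.0               # assumed weekly throughput of one core
    cores_needed = arrival_rate / reviews_per_core   # throughput to keep L stable

    print("Average reviews in flight (L): %.0f" % in_flight)        # -> 180
    print("Core reviewers needed to keep up: %.1f" % cores_needed)  # -> 4.8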

If we figured out how to model and measure Gerrit as a queueing system, then
we could improve its performance a lot more effectively; for instance, this
idea of privileging trivial patches over longer patches has roots in a
popular scheduling policy [2] for M/G/1 queues [3], but that does not really
help with the aging of 'longer service time' patches and does not have a
built-in preemption mechanism to avoid starvation.

Just a crazy opinion...
Armando

[1] - http://en.wikipedia.org/wiki/Little's_law
[2] - http://en.wikipedia.org/wiki/Shortest_job_first
[3] - http://en.wikipedia.org/wiki/M/G/1_queue


On 17 June 2014 14:12, Matthew Booth  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 17/06/14 12:36, Sean Dague wrote:
> > On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
> >> On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
> >>> We all know that review can be a bottleneck for Nova
> >>> patches. Not only that, but a patch lingering in review, no
> >>> matter how trivial, will eventually accrue rebases which sap
> >>> gate resources, developer time, and will to live.
> >>>
> >>> It occurs to me that there are a significant class of patches
> >>> which simply don't require the attention of a core reviewer.
> >>> Some examples:
> >>>
> >>> * Indentation cleanup/comment fixes * Simple code motion * File
> >>> permission changes * Trivial fixes which are obviously correct
> >>>
> >>> The advantage of a core reviewer is that they have experience
> >>> of the whole code base, and have proven their ability to make
> >>> and judge core changes. However, some fixes don't require this
> >>> level of attention, as they are self-contained and obvious to
> >>> any reasonable programmer.
> >>>
> >>> Without knowing anything of the architecture of gerrit, I
> >>> propose something along the lines of a '+1 (trivial)' review
> >>> flag. If a review gained some small number of these, I suggest
> >>> 2 would be reasonable, it would be equivalent to a +2 from a
> >>> core reviewer. The ability to set this flag would be a
> >>> privilege. However, the bar to gaining this privilege would be
> >>> low, and preferably automatically set, e.g. 5 accepted patches.
> >>> It would be removed for abuse.
> >>>
> >>> Is this practical? Would it help?
> >>
> >> You are right that some types of fix are so straightforward that
> >> most reasonable programmers can validate them. At the same time
> >> though, this means that they also don't really consume
> >> significant review time from core reviewers.  So having
> >> non-cores' approve trivial fixes wouldn't really reduce the
> >> burden on core devs.
> >>
> >> The main positive impact would probably be a faster turn around
> >> time on getting the patches approved because it is easy for the
> >> trivial fixes to drown in the noise.
> >>
> >> IME any non-trivial change to gerrit is just not going to happen
> >> in any reasonably useful timeframe though. Perhaps an
> >> alternative strategy would be to focus on identifying which the
> >> trivial fixes are. If there was an good way to get a list of all
> >> pending trivial fixes, then it would make it straightforward for
> >> cores to jump in and approve those simple patches as a priority,
> >> to avoid them languishing too long.
> >>
> >> If would be nice if gerrit had simple keyword tagging so any
> >> reviewer can tag an existing commit as "trivial", but that
> >> doesn't seem to exist as a concept yet.
> >>
> >> So an alternative perhaps submit trivial stuff using a well
> >> known topic eg
> >>
> >> # git review --topic trivial
> >>
> >> Then you can just query all changes in that topic to find easy
> >> stuff to approve.
> >
> > It could go in the commit message:
> >
> > TrivialFix
> >
> > Then could be queried with -
> > https://review.openstack.org/#/q/message:TrivialFix,n,z
> >
> > If a reviewer felt it wasn't a trivial fix, they could just edit
> > the co

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Armando M.
just a provocative thought: If we used the ovsdb connection instead, do we
really need an L2 agent :P?


On 17 June 2014 18:38, Kyle Mestery  wrote:

> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
>
> Thanks,
> Kyle
>
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, either trigger via RPC or via monitor
> in
> > one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more
> maintainable
> > and ensure faster event processing as well as making it easier to have
> some
> > form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but
> since
> > we've moving towards a unified agent, I think any new "big" ticket should
> > address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> >> intent to rebuild a more stable flexible agent.
> >>
> >> Taking the experience of ovs-agent bugs, I think the concurrency
> >> problem is also a very important problem, the agent gets lots of event
> >> from different greenlets, the rpc, the ovs monitor or the main loop.
> >> I'd suggest to serialize all event to a queue, then process events in
> >> a dedicated thread. The thread check the events one by one ordered,
> >> and resolve what has been changed, then apply the corresponding
> >> changes. If there is any error occurred in the thread, discard the
> >> current processing event, do a fresh start event, which reset
> >> everything, then apply the correct settings.
> >>
> >> The threading model is so important and may prevent tons of bugs in
> >> the future development, we should describe it clearly in the
> >> architecture
> >>
> >>
> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> >> wrote:
> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
> >> > added
> >> > more information on the etherpad [1] describing the proposed
> >> > architecture
> >> > for modular L2 agents. I have also posted some code fragments at [2]
> >> > sketching the implementation of the proposed architecture. Please
> have a
> >> > look when you get a chance and let us know if you have any comments.
> >> >
> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> >> > [2] https://review.openstack.org/#/c/99187/
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Armando M.
Mine wasn't really a serious suggestion: Neutron's controlling logic is
already bloated as it is, and my personal opinion would be in favor of a
leaner Neutron Server rather than a more complex one; adding more
controller-like logic to it certainly goes against that direction :)

Having said that, and as Vivek pointed out, using ovsdb gives us finer
control and the ability to react more effectively; however, with the current
server-agent rpc framework there's no way of leveraging that...so in the
grand scheme of things I'd rather see it prioritized lower rather than
higher, to give precedence to rearchitecting the framework first.

Armando


On 17 June 2014 19:25, Narasimhan, Vivekanandan <
vivekanandan.narasim...@hp.com> wrote:

>
>
> Managing the ports and plumbing logic is today driven by L2 Agent, with
> little assistance
>
> from controller.
>
>
>
> If we plan to move that functionality to the controller,  the controller
> has to be more
>
> heavy weight (both hardware and software)  since it has to do the job of
> L2 Agent for all
>
> the compute servers in the cloud. , We need to re-verify all scale numbers
> for the controller
>
> on POC’ing of such a change.
>
>
>
> That said, replacing CLI with direct OVSDB calls in the L2 Agent is
> certainly a good direction.
>
>
>
> Today, OVS Agent invokes flow calls of OVS-Lib but has no idea (or
> processing) to follow up
>
> on success or failure of such invocations.  Nor there is certain guarantee
> that all such
>
> flow invocations would be executed by the third-process fired by OVS-Lib
> to execute CLI.
>
>
>
> When we transition to OVSDB calls which are more programmatic in nature,
> we can
>
> enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
> codes (or content)
>
> and ovs-agent (and even other components) can act on such return state
> more
>
> intelligently/appropriately.
>
>
>
> --
>
> Thanks,
>
>
>
> Vivek
>
>
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Tuesday, June 17, 2014 10:26 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
> architecture
>
>
>
> just a provocative thought: If we used the ovsdb connection instead, do we
> really need an L2 agent :P?
>
>
>
> On 17 June 2014 18:38, Kyle Mestery  wrote:
>
> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
>
> Thanks,
> Kyle
>
>
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, either trigger via RPC or via monitor
> in
> > one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more
> maintainable
> > and ensure faster event processing as well as making it easier to have
> some
> > form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but
> since
> > we've moving towards a unified agent, I think any new "big" ticket should
> > address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> >> intent to rebuild a more stable flexible agent.
> >>
> >> Taking the experience of ovs-agent bugs, I think the concurrency
> >> problem is also a very important problem, the agent gets lots of event
> >> from different greenlets, the rpc, the ovs monitor or the main loop.
> >> I'd suggest to serialize all event to a queue, then process events in
> >> a dedicated thread. The thread check the events one by one ordered,
> >> and resolve what has been changed, then apply the corresponding
> >> changes. If there is any error occurred in the thread, discard the
> >> current processing event, do a fresh start event, which reset
> >> eve

Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-04 Thread Armando M.
Hi Simon,

You are absolutely right in your train of thought: unless the
third-party CI monitors and vets all the potential changes it cares
about, there's always a chance something might break. This is why I
think it's important that each Neutron third party CI should not only
test Neutron changes, but also Nova's, DevStack's and Tempest's.
Filters may be added to test only the relevant subtrees.

For instance, the VMware CI runs the full suite of tempest smoke
tests, as they come from upstream and it vets all the changes that go
in Tempest made to API and scenario tests as well as configuration
changes. As for Nova, we test changes to the vif parts, and for
DevStack, we validate changes made to lib/neutron*.
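
In pseudo-code, the kind of filtering I am describing looks roughly like
this (the project names and path patterns are purely indicative, not our
actual CI configuration):

    # Illustrative sketch only: decide whether a third-party CI should exercise
    # a change, based on the project it belongs to and the files it touches.
    import re

    RELEVANT = {
        'openstack/neutron': [r'.*'],                 # vet every Neutron change
        'openstack/nova': [r'vif'],                   # only the vif-related bits
        'openstack-dev/devstack': [r'^lib/neutron'],  # lib/neutron*
        'openstack/tempest': [r'^tempest/(api|scenario)/', r'^etc/'],
    }

    def should_run_ci(project, changed_files):
        patterns = RELEVANT.get(project, [])
        return any(re.search(p, f) for f in changed_files for p in patterns)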

Vetting all the changes coming in vs. only the ones that can
potentially break third-party support is a balancing act when you
don't have infinite resources at your disposal, or you're just ramping
up the CI infrastructure.

Cheers,
Armando

On 4 April 2014 02:00, Simon Pasquier  wrote:
> Hi Salvatore,
>
> On 03/04/2014 14:56, Salvatore Orlando wrote:
>> Hi Simon,
>>
> 
>>
>> I hope stricter criteria will be enforced for Juno; I personally think
>> every CI should run at least the smoketest suite for L2/L3 services (eg:
>> load balancer scenario will stay optional).
>
> I had a little thinking about this and I feel like it might not have
> caught _immediately_ the issue Kyle talked about [1].
>
> Let's rewind the time line:
> 1/ Change to *Nova* adding external events API is merged
> https://review.openstack.org/#/c/76388/
> 2/ Change to *Neutron* notifying Nova when ports are ready is merged
> https://review.openstack.org/#/c/75253/
> 3/ Change to *Nova* making libvirt wait for Neutron notifications is merged
> https://review.openstack.org/#/c/74832/
>
> At this point and assuming that the external ODL CI system were running
> the L2/L3 smoke tests, change #3 could have passed since external
> Neutron CI aren't voting for Nova. Instead it would have voted against
> any subsequent change to Neutron.
>
> Simon
>
> [1] https://bugs.launchpad.net/neutron/+bug/1301449
>
>>
>> Salvatore
>>
>> [1] https://review.openstack.org/#/c/75304/
>>
>>
>>
>> On 3 April 2014 12:28, Simon Pasquier > > wrote:
>>
>> Hi,
>>
>> I'm looking at [1] but I see no requirement of which Tempest tests
>> should be executed.
>>
>> In particular, I'm a bit puzzled that it is not mandatory to boot an
>> instance and check that it gets connected to the network. To me, this is
>> the very minimum for asserting that your plugin or driver is working
>> with Neutron *and* Nova (I'm not even talking about security groups). I
>> had a quick look at the existing 3rd party CI systems and I found none
>> running this kind of check (correct me if I'm wrong).
>>
>> Thoughts?
>>
>> [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>> --
>> Simon Pasquier
>> Software Engineer (OpenStack Expertise Center)
>> Bull, Architect of an Open World
>> Phone: + 33 4 76 29 71 49 
>> http://www.bull.com
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Armando M.
What about:

https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
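
That pulls in the ordereddict backport for Python 2.6; the code then just
needs the usual guarded import to take advantage of it, something along
these lines:

    # Guarded import so the same code runs on 2.6 and 2.7+; assumes the
    # 'ordereddict' backport (what the test-requirements entry provides) is
    # installed when running under 2.6.
    try:
        from collections import OrderedDict   # Python 2.7+
    except ImportError:
        from ordereddict import OrderedDict   # Python 2.6 backport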



On 22 September 2014 10:23, Kevin L. Mitchell
 wrote:
> My team just ran into an issue where neutron was not passing unit tests
> when run under Python 2.6.  We tracked this down to a test support
> function using collections.OrderedDict.  This was in locally forked
> code, but when I compared it to upstream code, I found that the code in
> upstream neutron is identical…meaning that upstream neutron cannot
> possibly be passing unit tests under Python 2.6.  Yet, somehow, the
> neutron reviews I've looked at are passing the Python 2.6 gate!  Any
> ideas as to how this could be happening?
>
> For the record, the problem is in neutron/tests/unit/test_api_v2.py:148,
> in the function _get_collection_kwargs(), which uses
> collections.OrderedDict.  As there's no reason to use OrderedDict here
> that I can see—there's no definite order on the initialization, and all
> consumers pass it to an assert_called_once_with() method with the '**'
> operator—I have proposed a review[1] to replace it with a simple dict.
>
> [1] https://review.openstack.org/#/c/123189/
> --
> Kevin L. Mitchell 
> Rackspace
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Armando M.
I suspect that the very reason underlying the existence of this thread
is that some users out there are not quite ready to pull the plug on
Python 2.6.

Any decision about dropping support for Python 2.6 should not be
based solely on making the developer's life easier, but maybe I am
stating the obvious.

Thanks,
Armando

On 22 September 2014 11:39, Solly Ross  wrote:
> I'm in favor of killing Python 2.6 with fire.
> Honestly, I think it's hurting code readability and productivity --
>
> You have to constantly think about whether or not some feature that
> the rest of the universe is already using is supported in Python 2.6
> whenever you write code.
>
> As for readability, things like 'contextlib.nested' can go away if we can
> kill Python 2.6 (Python 2.7 supports nested context managers OOTB, in a much
> more readable way).
>
> Best Regards,
> Solly
>
> - Original Message -
>> From: "Joshua Harlow" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Sent: Monday, September 22, 2014 2:33:16 PM
>> Subject: Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't 
>> possibly be passing in neutron
>>
>> Just as an update to what exactly is RHEL python 2.6...
>>
>> This is the expanded source rpm:
>>
>> http://paste.ubuntu.com/8405074/
>>
>> The main one here appears to be:
>>
>> - python-2.6.6-ordereddict-backport.patch
>>
>> Full changelog @ http://paste.ubuntu.com/8405082/
>>
>> Overall I'd personally like to get rid of python 2.6, and move on, but then
>> I'd also like to get rid of 2.7 and move on also ;)
>>
>> - Josh
>>
>> On Sep 22, 2014, at 11:17 AM, Monty Taylor  wrote:
>>
>> > On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
>> >> On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
>> >>> What about:
>> >>>
>> >>> https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
>> >>
>> >> Pulling in ordereddict doesn't do anything if your code doesn't use it
>> >> when OrderedDict isn't in collections, which is the case here.  Further,
>> >> there's no reason that _get_collection_kwargs() needs to use an
>> >> OrderedDict: it's initialized in an arbitrary order (generator
>> >> comprehension over a set), then later passed to functions with **, which
>> >> converts it to a plain old dict.
>> >>
>> >
>> > So - as an update to this, this is due to RedHat once again choosing to
>> > backport features from 2.7 into a thing they have labeled 2.6.
>> >
>> > We test 2.6 on Centos6 - which means we get RedHat's patched version of
>> > Python2.6 - which, it turns out, isn't really 2.6 - so while you might
>> > want to assume that we're testing 2.6 - we're not - we're testing
>> > 2.6-as-it-appears-in-RHEL.
>> >
>> > This brings up a question - in what direction do we care/what's the
>> > point in the first place?
>> >
>> > Some points to ponder:
>> >
>> > - 2.6 is end of life - so the fact that this is coming up is silly, we
>> > should have stopped caring about it in OpenStack 2 years ago at least
>> > - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
>> > point of supporting it at all
>> > - Maybe we ACTUALLY care about 2.6 support across the board, in which
>> > case we should STOP testing using Centos6 which is not actually 2.6
>> >
>> > I vote for just amending our policy right now and killing 2.6 with
>> > prejudice.
>> >
>> > (also, I have heard a rumor that there are people running in to problems
>> > due to the fact that they are deploying onto a two-release-old version
>> > of Debian. No offense - but there is no way we're supporting that)
>> >

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] IPv6 bug fixes that would be nice to have in Juno

2014-10-03 Thread Armando M.
I have all of these bugs on my radar, and I want to fast track them
for merging in the next few days.

Please tag the bug reports with 'juno-rc-potential'.

For each of them we can discuss the loss of functionality they cause.
If no workaround can be found, we should definitely cut an RC2.

Armando

On 3 October 2014 12:21, Collins, Sean  wrote:
> On Fri, Oct 03, 2014 at 02:58:36PM EDT, Henry Gessau wrote:
>> There are some fixes for IPv6 bugs that unfortunately missed the RC1 cut.
>> These bugs are quite important for IPv6 users and therefore I would like to
>> lobby for getting them into a possible RC2 of Neutron Juno.
>
> Henry and I spoke about these bugs, and I agree with his assessment. +1!
> --
> Sean M. Collins

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] HA of dhcp agents?

2014-10-21 Thread Armando M.
As far as I can tell when you specify:

dhcp_agents_per_network = X > 1

The server binds the network to all the agents (up to X), which means that
you have multiple instances of dnsmasq serving dhcp requests at the same
time. If one agent dies, there is no fail-over needed per se, as the other
agent will continue to serve DHCP requests unaffected.

For instance, in my env I have dhcp_agents_per_network=2, so if I create a
network and list the agents serving the network, I will see the following:

neutron dhcp-agent-list-hosting-net test

+--------------------------------------+--------+----------------+-------+
| id                                   | host   | admin_state_up | alive |
+--------------------------------------+--------+----------------+-------+
| 6dd09649-5e24-403b-9654-7aa0f69f04fb | host1  | True           | :-)   |
| 7d47721a-2725-45f8-b7c4-2731cfabdb48 | host2  | True           | :-)   |
+--------------------------------------+--------+----------------+-------+

Isn't that what you're after?

Cheers,
Armando

On 21 October 2014 22:26, Noel Burton-Krahn  wrote:

> We currently have a mechanism for restarting the DHCP agent on another
> node, but we'd like the new agent to take over all the old networks of the
> failed dhcp instance.  Right now, since dhcp agents are distinguished by
> host, and the host has to match the host of the ovs agent, and the ovs
> agent's host has to be unique per node, the new dhcp agent is registered as
> a completely new agent and doesn't take over the failed agent's networks.
> I'm looking for a way to give the new agent the same roles as the previous
> one.
>
> --
> Noel
>
>
> On Tue, Oct 21, 2014 at 12:12 AM, Kevin Benton  wrote:
>
>> No, unfortunately when the DHCP agent dies there isn't automatic
>> rescheduling at the moment.
>>
>> On Mon, Oct 20, 2014 at 11:56 PM, Noel Burton-Krahn > > wrote:
>>
>>> Thanks for the pointer!
>>>
>>> I like how the first google hit for this is:
>>>
>>> Add details on dhcp_agents_per_network option for DHCP agent HA
>>> https://bugs.launchpad.net/openstack-manuals/+bug/1370934
>>>
>>> :) Seems reasonable to set dhcp_agents_per_network > 1.  What happens
>>> when a DHCP agent dies?  Does the scheduler automatically bind another
>>> agent to that network?
>>>
>>> Cheers,
>>> --
>>> Noel
>>>
>>>
>>>
>>> On Mon, Oct 20, 2014 at 9:03 PM, Jian Wen  wrote:
>>>
 See dhcp_agents_per_network in neutron.conf.

 https://bugs.launchpad.net/neutron/+bug/1174132

 2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn :

> I've been working on failover for dhcp and L3 agents.  I see that in
> [1], multiple dhcp agents can host the same network.  However, it looks
> like I have to manually assign networks to multiple dhcp agents, which
> won't work.  Shouldn't multiple dhcp agents automatically fail over?
>
> [1]
> http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 --
 Best,

 Jian

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] metadata agent performance

2014-10-21 Thread Armando M.
It sounds like the only reasonable option we are left with right now is to
document.

Even if we enabled/removed the backport, it would take time until users can
get their hands on a new cut of the stable branch.

We would need to be more diligent in the future and limit backports to just
bug fixes to prevent situations like this from occurring, or maybe we need
to have better testing...um...definitely the latter :)

My 2c
Armando

On 22 October 2014 05:56, Maru Newby  wrote:

> We merged caching support for the metadata agent in juno, and backported
> to icehouse.  It was enabled by default in juno, but disabled by default in
> icehouse to satisfy the stable maint requirement of not changing functional
> behavior.
>
> While performance of the agent was improved with caching enabled, it
> regressed a reported 8x when caching was disabled [1].  This means that by
> default, the caching backport severely impacts icehouse Neutron's
> performance.
>
> So, what is the way forward?  We definitely need to document the problem
> for both icehouse and juno.  Is documentation enough?  Or can we enable
> caching by default in icehouse?  Or remove the backport entirely.
>
> There is also a proposal to replace the metadata agent’s use of the
> neutron client in favor of rpc [2].  There were comments on an old bug
> suggesting we didn’t want to do this [3], but assuming that we want this
> change in Kilo, is backporting even a possibility given that it implies a
> behavioral change to be useful?
>
> Thanks,
>
>
> Maru
>
>
>
> 1: https://bugs.launchpad.net/cloud-archive/+bug/1361357
> 2: https://review.openstack.org/#/c/121782
> 3: https://bugs.launchpad.net/neutron/+bug/1092043
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] HA of dhcp agents?

2014-10-22 Thread Armando M.
Hi Noel,

On 22 October 2014 01:57, Noel Burton-Krahn  wrote:

> Hi Armando,
>
> Sort of... but what happens when the second one dies?
>

You mean, you lost both (all) agents? In this case, yes you'd need to
resurrect the agents or move the networks to another available agent.
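For what it's worth, a rough sketch of how a network could be rebound to a
surviving agent (assuming python-neutronclient; the call names below are from
memory and the ids are placeholders, so double-check against your client
version; the CLI equivalents are neutron dhcp-agent-network-remove and
dhcp-agent-network-add):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    network_id = 'NET_UUID'            # placeholders, not real ids
    dead_agent_id = 'DEAD_AGENT_UUID'
    live_agent_id = 'LIVE_AGENT_UUID'

    # unbind the network from the dead agent, then bind it to a live one
    neutron.remove_network_from_dhcp_agent(dead_agent_id, network_id)
    neutron.add_network_to_dhcp_agent(live_agent_id,
                                      {'network_id': network_id})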


> If one DHCP agent dies, I need to be able to start a new DHCP agent on
> another host and take over from it.  As far as I can tell right now, when
> one DHCP agent dies, another doesn't take up the slack.
>

I am not sure I fully understand the failure mode you are trying to
address. The DHCP agents can work in an active-active configuration, so if
you have N agents assigned per network, all of them should be able to
serve DHCP traffic. If this is not your experience, i.e. one agent dies
and DHCP is no longer served on the network by any other agent, then there
might be some other problem going on.


>
>
> I have the same problem wit L3 agents by the way, that's next on my list
>
> --
> Noel
>
>
> On Tue, Oct 21, 2014 at 12:52 PM, Armando M.  wrote:
>
>> As far as I can tell when you specify:
>>
>> dhcp_agents_per_network = X > 1
>>
>> The server binds the network to all the agents (up to X), which means
>> that you have multiple instances of dnsmasq serving dhcp requests at the
>> same time. If one agent dies, there is no fail-over needed per se, as the
>> other agent will continue to server dhcp requests unaffected.
>>
>> For instance, in my env I have dhcp_agents_per_network=2, so If I create
>> a network, and list the agents serving the network I will see the following:
>>
>> neutron dhcp-agent-list-hosting-net test
>>
>> +--------------------------------------+--------+----------------+-------+
>> | id                                   | host   | admin_state_up | alive |
>> +--------------------------------------+--------+----------------+-------+
>> | 6dd09649-5e24-403b-9654-7aa0f69f04fb | host1  | True           | :-)   |
>> | 7d47721a-2725-45f8-b7c4-2731cfabdb48 | host2  | True           | :-)   |
>> +--------------------------------------+--------+----------------+-------+
>>
>> Isn't that what you're after?
>>
>> Cheers,
>> Armando
>>
>> On 21 October 2014 22:26, Noel Burton-Krahn  wrote:
>>
>>> We currently have a mechanism for restarting the DHCP agent on another
>>> node, but we'd like the new agent to take over all the old networks of the
>>> failed dhcp instance.  Right now, since dhcp agents are distinguished by
>>> host, and the host has to match the host of the ovs agent, and the ovs
>>> agent's host has to be unique per node, the new dhcp agent is registered as
>>> a completely new agent and doesn't take over the failed agent's networks.
>>> I'm looking for a way to give the new agent the same roles as the previous
>>> one.
>>>
>>> --
>>> Noel
>>>
>>>
>>> On Tue, Oct 21, 2014 at 12:12 AM, Kevin Benton 
>>> wrote:
>>>
>>>> No, unfortunately when the DHCP agent dies there isn't automatic
>>>> rescheduling at the moment.
>>>>
>>>> On Mon, Oct 20, 2014 at 11:56 PM, Noel Burton-Krahn <
>>>> n...@pistoncloud.com> wrote:
>>>>
>>>>> Thanks for the pointer!
>>>>>
>>>>> I like how the first google hit for this is:
>>>>>
>>>>> Add details on dhcp_agents_per_network option for DHCP agent HA
>>>>> https://bugs.launchpad.net/openstack-manuals/+bug/1370934
>>>>>
>>>>> :) Seems reasonable to set dhcp_agents_per_network > 1.  What happens
>>>>> when a DHCP agent dies?  Does the scheduler automatically bind another
>>>>> agent to that network?
>>>>>
>>>>> Cheers,
>>>>> --
>>>>> Noel
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Oct 20, 2014 at 9:03 PM, Jian Wen  wrote:
>>>>>
>>>>>> See dhcp_agents_per_network in neutron.conf.
>>>>>>
>>>>>> https://bugs.launchpad.net/neutron/+bug/1174132
>>>>>>
>>>>>> 2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn :
>>>>>>
>>>>>>> I've been working on failover for dhcp and L3 agents.  I see that in
>>>>>>> [1], multiple dhcp agents can host the same network.  However, it looks
>>>>>>> like

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Armando M.
Sorry for jumping into this thread late...there are lots of details to
process, and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward,
at the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I
think it is sensible to adopt the latest spec system we have been using
(Gerrit and the spec submissions) to understand where we are.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor
specific blueprint for now.

When I look at these I clearly see that we jump all the way to
implementation details. From an architectural point of view, this clearly
does not make a lot of sense.

In order to ensure that everyone is on the same page, I would suggest
having a discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible
interactions that an actor (i.e. the tenant or the admin) can have with the
system (an OpenStack deployment), when these NFV-enabling capabilities are
available? What are the observed outcomes once these interactions have
taken place?

- Management API: what abstractions do we expose to the tenant or admin
(do we augment the existing resources, create new resources, or both)?
This should obviously be driven by a set of use cases, and we need to
identify the minimum set of logical artifacts that would let us meet the
needs of the widest set of use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if
anything, so that we can implement these NFV-enabling constructs
successfully? Are there any changes to the core L2 API? Are there any
changes required to the core framework (scheduling, policy, notifications,
data model, etc.)?

- Add support to the existing plugin backends: the openvswitch reference
implementation is an obvious candidate, but other plugins may want to
leverage the newly defined capabilities too. Once the above mentioned
points have been fleshed out, it should be fairly straightforward to have
these efforts progress in autonomy.

IMO, until we get a full understanding of the aspects above, I don't
believe the core team is in the best position to determine the best
approach forward; I think it's in everyone's interest to make sure that
something cohesive comes out of this; the worst possible outcome is no
progress at all, or even worse, some Frankenstein system that no one
really understands, in terms of what it does or how it can be used.

I will go over the specs one more time in order to identify some answers to
my points above. I hope someone can help me through the process.


Many thanks,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-29 Thread Armando M.
I must admit I haven't dug in too much, but this might also look
suspicious:

https://review.openstack.org/#/c/96782/

Perhaps it's a combination of both? :)

On 29 October 2014 08:17, Kyle Mestery  wrote:

> On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
> >
> >
> > Sent from my iPad
> >
> > On 2014-10-29, at 下午8:01, Robert van Leeuwen <
> robert.vanleeu...@spilgames.com> wrote:
> >
>  I find our current design is to remove all flows and then add flows entry
>  by entry; this will cause every network node to break off all tunnels
>  between other network nodes and all compute nodes.
> >>> Perhaps a way around this would be to add a flag on agent startup
> >>> which would have it skip reprogramming flows. This could be used for
> >>> the upgrade case.
> >>
> >> I hit the same issue last week and filed a bug here:
> >> https://bugs.launchpad.net/neutron/+bug/1383674
> >>
> >> From an operators perspective this is VERY annoying since you also
> cannot push any config changes that requires/triggers a restart of the
> agent.
> >> e.g. something simple like changing a log setting becomes a hassle.
> >> I would prefer the default behaviour to be to not clear the flows or at
> the least an config option to disable it.
> >>
> >
> > +1, we also suffered from this even when a very little patch is done
> >
> I'd really like to get some input from the tripleo folks, because they
> were the ones who filed the original bug here and were hit by the
> agent NOT reprogramming flows on agent restart. It does seem fairly
> obvious that adding an option around this would be a good way forward,
> however.
>
> Thanks,
> Kyle
>
> >>
> >> Cheers,
> >> Robert van Leeuwen
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Improving dhcp agent scheduling interface

2014-11-05 Thread Armando M.
Hi Eugene thanks for bringing this up for discussion. My comments inline.
Thanks,
Armando

On 5 November 2014 12:07, Eugene Nikanorov  wrote:

> Hi folks,
>
> I'd like to raise a discussion kept in irc and in gerrit recently:
> https://review.openstack.org/#/c/131944/
>
> The intention of the patch is to clean up particular scheduling
> method/interface:
> schedule_network.
>
> Let me clarify why I think it needs to be done (beside code api
> consistency reasons):
> Scheduling process is ultimately just a two steps:
> 1) choosing appropriate agent for the network
> 2) adding binding between the agent and the network
> To perform those two steps one doesn't need network object, network_id is
> satisfactory for this need.
>

I would argue that it isn't, actually.

You may need to know the state of the network to make that placement
decision. Just passing the id may cause the scheduling logic to issue an
extra DB query that could easily be avoided if the right interface between
the caller of a scheduler and the scheduler itself were in place. For
instance, we cannot fix [1] (as you pointed out) today because the method
only accepts a dict that holds just a partial representation of the
network. If we had the entire DB object we would avoid that; just passing
the id is going in the opposite direction, IMO.
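To make the trade-off concrete, a purely illustrative sketch (made-up names,
not the actual scheduler code): with an id-only interface the driver has to
re-query state the caller may already hold.

    def db_get_network(context, network_id):
        # stand-in for the plugin/DB lookup the driver is forced to repeat
        return {'id': network_id, 'subnets': []}

    class _Base(object):
        def _pick_agent(self, context, network):
            # placeholder for the actual placement decision
            return None

    class IdBasedScheduler(_Base):
        def schedule_network(self, context, network_id):
            network = db_get_network(context, network_id)  # extra round trip
            return self._pick_agent(context, network)

    class ObjectBasedScheduler(_Base):
        def schedule_network(self, context, network):
            # caller already holds the full object; subnets, admin_state_up,
            # etc. are available for smarter placement without extra queries
            return self._pick_agent(context, network)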


> However, there is a concern, that having full dict (or full network
> object) could allow us to do more flexible things in step 1 like deciding,
> whether network should be scheduled at all.
>

That's the whole point of scheduling, is it not? If you are arguing that we
should split the schedule method into two separate steps
(get_me_available_agent and bind_network_to_agent), and make the caller of
the schedule method carry out the two-step process by itself, I think it
could be worth exploring that, but at this point I don't believe this is
the right refactoring.


> See the TODO for the reference:
>

[1]


>
> https://github.com/openstack/neutron/blob/master/neutron/scheduler/dhcp_agent_scheduler.py#L64
>
> However, this just puts an unnecessary (and actually, incorrect)
> requirement on the caller, to provide the network dict, mainly because
> caller doesn't know what content of the dict the callee (scheduler driver)
> expects.
>

Why is it incorrect? We should move away from dictionaries and pass
objects instead, so that they can be reused where it makes sense without
incurring the overhead of re-fetching the object associated with the uuid
when needed. We can even hide the complexity of refreshing the copy of the
object every time it is accessed, if needed. With information hiding and
encapsulation we can wrap this logic in one place without scattering it
around everywhere in the code base, like it's done today.
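A tiny, hypothetical sketch of the kind of encapsulation meant here (not an
existing Neutron class): consumers see an object, and the refresh-from-DB
detail lives in exactly one place.

    class LazyNetwork(object):
        """Wraps a network id and fetches its state only when accessed."""

        def __init__(self, fetch_fn, network_id):
            self._fetch = fetch_fn      # e.g. a bound plugin/DB getter
            self._network_id = network_id
            self._cached = None

        @property
        def state(self):
            if self._cached is None:    # a TTL could also be honoured here
                self._cached = self._fetch(self._network_id)
            return self._cached

    # usage: callers read net.state['subnets'] without knowing, or caring,
    # when the underlying query actually happens
    net = LazyNetwork(lambda nid: {'id': nid, 'subnets': []}, 'NET_UUID')
    assert net.state['subnets'] == []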


> Currently scheduler is only interested in ID, if there is another
> scheduling driver,
>

No, the scheduler needs to know the state of the network to do proper
placement. The fact that the id suffices today is a side-effect of the
default (i.e. random) scheduling. If we want to do more intelligent
placement we need the state of the network.


> it may now require additional parameters (like list of full subnet dicts)
> in the dict which may or may not be provided by the calling code.
> Instead of making assumptions about what is in the dict, it's better to go
> with simpler and clearer interface that will allow scheduling driver to do
> whatever makes sense to it. In other words: caller provides id, driver
> fetches everything it
> needs using the id. For existing scheduling drivers it's a no-op.
>

Again, the problem lies with the fact that we're passing dictionaries
around.


>
> I think l3 scheduling is an example of interface done in the more right
> way; to me it looks clearer and more consistent.
>

I may argue that the l3 scheduling api is the bad example for the above
mentioned reasons.


>
> Thanks,
> Eugene.
>

At this point I am still not convinced by the arguments provided that
patch 131944 should go forward as it is.


>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Armando M.
I would be open to making this toggle switch available; however, I feel
that doing it via static configuration can introduce an unnecessary burden
on the operator. Perhaps we could explore a way for the agent to figure
out which state it's supposed to be in based on its reported status?
Armando

On 5 November 2014 12:09, Salvatore Orlando  wrote:

> I have no opposition to that, and I will be happy to assist reviewing the
> code that will enable flow synchronisation  (or to say it in an easier way,
> punctual removal of flows unknown to the l2 agent).
>
> In the meanwhile, I hope you won't mind if we go ahead and start making
> flow reset optional - so that we stop causing downtime upon agent restart.
>
> Salvatore
>
> On 5 November 2014 11:57, Erik Moe  wrote:
>
>>
>>
>> Hi,
>>
>>
>>
>> I also agree, IMHO we need flow synchronization method so we can avoid
>> network downtime and stray flows.
>>
>>
>>
>> Regards,
>>
>> Erik
>>
>>
>>
>>
>>
>> *From:* Germy Lure [mailto:germy.l...@gmail.com]
>> *Sent:* den 5 november 2014 10:46
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [neutron][TripleO] Clear all flows when
>> ovs agent start? why and how avoid?
>>
>>
>>
>> Hi Salvatore,
>>
>> A startup flag is really a simpler approach. But in what situation we
>> should set this flag to remove all flows? upgrade? restart manually?
>> internal fault?
>>
>>
>>
>> Indeed, we only need to refresh flows when there are inconsistent
>> (incorrect, unwanted, stale and so on) flows between the agent and the
>> related OVS. But the problem is: how do we know this? I think a startup
>> flag is too rough, unless we can tolerate the inconsistent situation.
>>
>>
>>
>> Of course, I believe that turning off the startup reset-flows action can
>> resolve most problems. The flows are correct most of the time, after all.
>> But considering NFV five-nines requirements, I still recommend the flow
>> synchronization approach.
>>
>>
>>
>> BR,
>>
>> Germy
>>
>>
>>
>> On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando 
>> wrote:
>>
>> From what I gather from this thread and related bug report, the change
>> introduced in the OVS agent is causing a data plane outage upon agent
>> restart, which is not desirable in most cases.
>>
>>
>>
>> The rationale for the change that introduced this bug was, I believe,
>> cleaning up stale flows on the OVS agent, which also makes some sense.
>>
>>
>>
>> Unless I'm missing something, I reckon the best way forward is actually
>> quite straightforward; we might add a startup flag to reset all flows and
>> not reset them by default.
>>
>> While I agree the "flow synchronisation" process proposed in the previous
>> post is valuable too, I hope we might be able to fix this with a simpler
>> approach.
>>
>>
>>
>> Salvatore
>>
>>
>>
>> On 5 November 2014 04:43, Germy Lure  wrote:
>>
>> Hi,
>>
>>
>>
>> Consider the triggering of restart agent, I think it's nothing but:
>>
>> 1). only restart agent
>>
>> 2). reboot the host that agent deployed on
>>
>>
>>
>> When the agent started, the ovs may:
>>
>> a.have all correct flows
>>
>> b.have nothing at all
>>
>> c.have partly correct flows, the others may need to be reprogrammed,
>> deleted or added
>>
>>
>>
>> In any case, I think both users and developers would be happy to see the
>> system recover ASAP after an agent restart. Ideally the agent would only
>> push the incorrect flows and keep the correct ones. This ensures that
>> traffic relying on correct flows keeps working while the agent starts.
>>
>>
>>
>> So, I suggest two solutions:
>>
>> 1. The agent gets all flows from OVS and compares them with its local
>> flows after restarting, and only corrects the ones that differ.
>>
>> 2. Adapt OVS and the agent: the agent just pushes all flows (without
>> removing) every time, and OVS prepares two tables to switch flows (like an
>> RCU lock).
>>
>>
>>
>> Option 1 is recommended because of third-party vendors.
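A rough sketch of what option 1 amounts to (illustrative only, not the actual
OVS agent code): reconcile the flow tables instead of wiping them.

    def sync_flows(expected, actual, add_flow, delete_flow):
        # expected/actual: dicts keyed by (table, match) -> actions
        for key, actions in expected.items():
            if actual.get(key) != actions:
                add_flow(key, actions)   # (re)program only drifted flows
        for key in actual:
            if key not in expected:
                delete_flow(key)         # remove stray flows only
        # flows already in the desired state are never touched, so traffic
        # keeps flowing across the agent restart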
>>
>>
>>
>> BR,
>>
>> Germy
>>
>>
>>
>>
>>
>> On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec 
>> wrote:
>>
>> On 10/29/2014 10:17 AM, Kyle Mestery wrote:
>> > On Wed, Oct 29, 2014 at 7:25 AM, Hly  wrote:
>> >>
>> >>
>> >> Sent from my iPad
>> >>
>> >> On 2014-10-29, at 下午8:01, Robert van Leeuwen <
>> robert.vanleeu...@spilgames.com> wrote:
>> >>
>> > I find our current design is remove all flows then add flow by
>> entry, this
>> > will cause every network node will break off all tunnels between
>> other
>> > network node and all compute node.
>>  Perhaps a way around this would be to add a flag on agent startup
>>  which would have it skip reprogramming flows. This could be used for
>>  the upgrade case.
>> >>>
>> >>> I hit the same issue last week and filed a bug here:
>> >>> https://bugs.launchpad.net/neutron/+bug/1383674
>> >>>
>> >>> From an operators perspective this is VERY annoying since you also
>> cannot push any config changes that requires/triggers a restart of the
>> agent.
>> >>> e.g. something simple like changing a log setting becomes a hassle.
>> >>> I would pref

[openstack-dev] Fw: [neutron] social event

2014-11-06 Thread Armando M.
I have just realized that I should have cross-referenced this mail on both
MLs. Same message for the dev mailing list.

Thanks,
Armando

On 6 November 2014 00:32, Armando M.  wrote:

> Hi there,
>
> I know this may be somewhat short notice, but a few of us have wondered if
> we should continue the tradition of having a social gathering of Neutron
> folks to have a few drinks and talk about work in a slightly less boring
> setting.
>
> I was looking at:
>
> https://plus.google.com/+PlayOffWagramParis/about?hl=en
>
> It seems close enough to the conference venue, and spacious enough to hold
> a dozen people or so. I would suggest we go over there right after the
> end of the summit session or thereabouts, say 6.30pm.
>
> RSVP
>
> Cheers,
> Armando
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] social event

2014-11-06 Thread Armando M.
Thanks to everyone who turned up!

It was nice seeing you there; it was last-minute planning...but we managed
to squeeze in okay!

Cheers,
Armando

On 6 November 2014 17:16, Oleg Bondarev  wrote:

> Please count me in.
>
> Thanks,
> Oleg
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] dvr l3_snat

2014-11-07 Thread Armando M.
Not sure if you've seen this one too:

https://wiki.openstack.org/wiki/Neutron/DVR

Hope this helps!
Armando

On 7 November 2014 01:50, Li Tianqing  wrote:

> Oh, thanks, i finally find it.
> it's all here.
> https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
>
> Thanks a lot.
>
> --
> Best
> Li Tianqing
>
> At 2014-11-06 20:47:39, "Henry"  wrote:
>
> Have you read previous posts? This topic had been discussed for a while.
>
> Sent from my iPad
>
> On 2014-11-6, at 下午6:18, "Li Tianqing"  wrote:
>
> Hello,
> Why do we put l3_snat on the network node to handle North/South SNAT, and
> why don't we put it on the compute node?
> Is it possible to put the l3_agent on all compute nodes for North/South
> SNAT, DNAT, and East/West L3 routing?
>
>
>
>
> --
> Best
> Li Tianqing
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon ode support

2014-11-08 Thread Armando M.
Hi Miguel,

Thanks for picking this up. Pull me in and I'd be happy to help!

Cheers,
Armando

On 7 November 2014 10:05, Miguel Ángel Ajo  wrote:

>
> Hi Yorik,
>
>I was talking with Mark Mcclain a minute ago here at the summit about
> this. And he told me that now at the start of the cycle looks like a good
> moment to merge the spec & the root wrap daemon bits, so we have a lot of
> headroom for testing during the next months.
>
>We need to upgrade the spec [1] to the new Kilo format.
>
>Do you have some time to do it? I can allocate some time and do it
> right away.
>
> [1] https://review.openstack.org/#/c/93889/
> --
> Miguel Ángel Ajo
> Sent with Sparrow 
>
> On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:
>
> +1
>
> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>
>
> -Original Message-
> From: Yuriy Taraday [yorik@gmail.com]
> Received: Thursday, 24 Jul 2014, 0:42
> To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
>
> Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon
>mode support
>
>
> Hello.
>
> I'd like to propose making a spec freeze exception for
> rootwrap-daemon-mode spec [1].
>
> Its goal is to save agents' execution time by using daemon mode for
> rootwrap and thus avoiding python interpreter startup time as well as sudo
> overhead for each call. Preliminary benchmark shows 10x+ speedup of the
> rootwrap interaction itself.
>
> This spec have a number of supporters from Neutron team (Carl and Miguel
> gave it their +2 and +1) and have all code waiting for review [2], [3], [4].
> The only thing that has been blocking its progress is Mark's -2 left when
> oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
> in oslo.rootwrap is steadily getting approved [5].
>
> [1] https://review.openstack.org/93889
> [2] https://review.openstack.org/82787
> [3] https://review.openstack.org/84667
> [4] https://review.openstack.org/107386
> [5]
> https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z
>
> --
>
> Kind regards, Yuriy.
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About VMWare network bp

2014-11-13 Thread Armando M.
Hi there,

My answers inline. CC the dev list too.

On 13 November 2014 01:24, Gary Kotton  wrote:

>  Hi,
> At the moment the BP is blocked by the design on splitting  out the vendor
> plugins.
>
We have implemented the NSXv plugin based on stable/icehouse and plan to
> start to push this upstream soon. So at the moment I think that we are all
> blocked. The NSXv plugin is a holistic one. The IBM and HP are drivers that
> hook into the ML2. I am not sure if these will reside in the same or
> different projects.
>

I don't think this statement is entirely accurate; please let's not spread
FUD. It is true that splitting out the vendor plugins has been proposed
at the summit, but nothing has actually been finalized yet. As a matter of
fact, the proposal will be going through the same review process as any
other community effort in the form of a blueprint specification.

The likely outcome of that can be:

- the proposal gets momentum and it gets ultimately approved
- the proposal does not get any traction and it's ultimately deferred
- the proposal gets attention, but it's shot down for lack of agreement

Regardless of the outcome, we can always find a place for the code being
contributed; therefore, I would suggest proceeding and making progress on
any pending effort you may have: the blueprint spec, the actual code, and
the 3rd party CI infrastructure.

If you have made progress on all of three, that's even better!

Hope this helps
Armando



>   From: Feng Xi BJ Yan 
> Date: Thursday, November 13, 2014 at 9:46 AM
> To: Gary Kotton , "arma...@gmail.com" <
> arma...@gmail.com>
> Cc: Zhu ZZ Zhu , "d...@us.ibm.com" 
> Subject: About VMWare network bp
>
>   Hi, Gary and Armando,
> Long time no see.
> Our work on VMWare network bp was blocked for a long time. Shall we go on?
> Please let me know if you guys have any plans on this. Maybe we could
> resume our weekly talk firstly.
>
> Best Regard:)
> Bruce Yan
>
> Yan, Fengxi (闫凤喜)
> Openstack Platform Team
> IBM China Systems & Technology Lab, Beijing
> E-Mail: yanfen...@cn.ibm.com
> Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
> Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software
> Park,No.8
> DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking support

2014-11-13 Thread Armando M.
I chimed in on another thread, but I am reinstating my point just in case.

On 13 November 2014 04:38, Gary Kotton  wrote:

>  Hi,
> A few months back we started to work on a umbrella spec for Vmware
> networking support (https://review.openstack.org/#/c/105369). There are a
> number of different proposals for a number of different use cases. In
> addition to providing one another with an update of our progress we need to
> discuss the following challenges:
>
>- At the summit there was talk about splitting out vendor code from
>the neutron code base. The aforementioned specs are not being approved
>until we have decided what we as a community want/need. We need to
>understand how we can continue our efforts and not be blocked or hindered
>by this debate.
>
The proposal of allowing vendor plugins to be in full control of their own
destiny will be submitted like any other blueprint and will be discussed
like any other community effort. In my opinion, there is no need to be
blocked waiting to see whether the proposal goes anywhere. Specs, code and
CI being submitted will have minimal impact irrespective of any decision
reached.

So my suggestion is to keep your code current with trunk, and do your 3rd
party CI infrastructure homework, so that when we are ready to pull the
trigger there will be no further delay.

>
>- CI updates – in order to provide a new plugin we are required to
>provide CI (yes, this is written in stone and in some cases marble)
>- Additional support may be required in the following:
>   - Nova – for example Neutron may be exposing extensions or
>   functionality that requires Nova integrations
>   - Devstack – In order to get CI up and running we need devatck
>   support
>
> As a step forwards I would like to suggest that we meeting at
> #openstack-vmware channel on Tuesday at 15:00 UTC. Is this ok with everyone?
> Thanks
> Gary
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-14 Thread Armando M.
Last Friday I recall we had two discussions around this topic. One in the
morning, which I think led Maruti to push [1]. The way I understood [1]
was that it is an attempt at unifying [2] and [3], by choosing the API
approach of one and the architectural approach of the other.

[1] https://review.openstack.org/#/c/134179/
[2] https://review.openstack.org/#/c/100278/
[3] https://review.openstack.org/#/c/93613/

Then there was another discussion in the afternoon, but I am not 100% sure
of the outcome.

All this churn makes me believe that we probably just need to stop
pretending we can achieve any sort of consensus on the approach, let the
different alternatives develop independently (assuming they all can), and
then let natural evolution take its course :)

Ultimately the biggest debate is on what the API model needs to be for
these abstractions. We can judge on which one is the best API of all, but
sometimes this ends up being a religious fight. A good API for me might not
be a good API for you, even though I strongly believe that a good API is
one that:

- is hard to use incorrectly
- is clear to understand
- does one thing, and does it well

So far I have yet to be convinced that we need to cram more than one
abstraction into a single API, as that violates the above-mentioned
principles. Ultimately I like the L2 GW API proposed by 1 and 2 because
it's in line with those principles. I'd rather start from there and iterate.

My 2c,
Armando

On 14 November 2014 08:47, Salvatore Orlando  wrote:

> Thanks guys.
>
> I think you've answered my initial question. Probably not in the way I was
> hoping it to be answered, but it's ok.
>
> So now we have potentially 4 different blueprint describing more or less
> overlapping use cases that we need to reconcile into one?
> If the above is correct, then I suggest we go back to the use case and
> make an effort to abstract a bit from thinking about how those use cases
> should be implemented.
>
> Salvatore
>
> On 14 November 2014 15:42, Igor Cardoso  wrote:
>
>> Hello all,
>> Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of
>> its use cases is exactly the L2 gateway. These proposals could probably be
>> inserted in a more generic work for moving existing datacenter L2 resources
>> to Neutron.
>> Cheers,
>>
>> On 14 November 2014 15:28, Mathieu Rohon  wrote:
>>
>>> Hi,
>>>
>>> As far as I understood last friday afternoon dicussions during the
>>> design summit, this use case is in the scope of another umbrella spec
>>> which would define external connectivity for neutron networks. Details
>>> of those connectivity would be defined through service plugin API.
>>>
>>> Ian do you plan to define such an umbrella spec? or at least, could
>>> you sum up the agreement of the design summit discussion in the ML?
>>>
>>> I see at least 3 specs which would be under such an umbrella spec :
>>> https://review.openstack.org/#/c/93329/ (BGPVPN)
>>> https://review.openstack.org/#/c/101043/ (Inter DC connectivity with
>>> VPN)
>>> https://review.openstack.org/#/c/134179/ (l2 gw aas)
>>>
>>>
>>> On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando 
>>> wrote:
>>> > Thanks Maruti,
>>> >
>>> > I have some comments and questions which I've posted on gerrit.
>>> > There are two things I would like to discuss on the mailing list
>>> concerning
>>> > this effort.
>>> >
>>> > 1) Is this spec replacing  https://review.openstack.org/#/c/100278 and
>>> > https://review.openstack.org/#/c/93613 - I hope so, otherwise this
>>> just adds
>>> > even more complexity.
>>> >
>>> > 2) It sounds like you should be able to implement this service plugin
>>> in
>>> > either a feature branch or a repository distinct from neutron. Can you
>>> > confirm that?
>>> >
>>> > Salvatore
>>> >
>>> > On 13 November 2014 13:26, Kamat, Maruti Haridas 
>>> > wrote:
>>> >>
>>> >> Hi Friends,
>>> >>
>>> >>  As discussed during the summit, I have uploaded the spec for
>>> review
>>> >> at https://review.openstack.org/#/c/134179/
>>> >>
>>> >> Thanks,
>>> >> Maruti
>>> >>
>>> >>
>>> >>
>>> >>
>>>
>>
>>
>>
>> --
>> Igor Duarte Cardoso.
>> http://igordcard.com
>> @igordcard 
>>

Re: [openstack-dev] [Neutron] LeastNetwork scheduling for DHCP

2014-11-14 Thread Armando M.
Benjamin,

Feel free to reach out. If you are referring to my -2, that was just
provisional.

Before we can go ahead and see an improved scheduling capability for DHCP,
you guys need to resolve the conflict between the overlapping blueprints,
working together or giving up one in favor of the other.

Cheers,
Armando

On 14 November 2014 07:28, GRASSART Benjamin <
benjamin.grass...@thalesgroup.com> wrote:

> Hi all,
>
>
>
> I would definitely be glad to work on the subject as well.
>
> However I am not sure I fully understand Armando's last remark in our
> change.
>
>
>
> I will try to discuss it with him on IRC.
>
>
>
> Regards,
>
>
>
> Benjamin GRASSART
>
>
>
> [@@ THALES GROUP INTERNAL @@]
>
>
>
> *De :* S M, Praveen Kumar [mailto:praveen-sm.ku...@hp.com]
> *Envoyé :* vendredi 7 novembre 2014 09:27
> *À :* Narasimhan, Vivekanandan; OpenStack Development Mailing List (not
> for usage questions)
> *Cc :* Beltur, Jayashree; GRASSART Benjamin; Sourabh Patwardhan
> (sopatwar); M, Shiva Kumar; A, Keshava
> *Objet :* RE: [Neutron] LeastNetwork scheduling for DHCP
>
>
>
> Hi Vivek,
>
>
>
> We are definitely interested in working on these blueprints
> collaboratively.
>
>
>
> We have a working implementation for our blueprint and received a few
> important comments from Armando, which we are addressing currently.
>
>
>
>
>
>
>
> Regards
>
> Praveen.
>
>
>
>
>
> *From:* Narasimhan, Vivekanandan
> *Sent:* Thursday, November 06, 2014 9:09 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Beltur, Jayashree; S M, Praveen Kumar;
> benjamin.grass...@thalesgroup.com; Sourabh Patwardhan (sopatwar)
> *Subject:* [Neutron] LeastNetwork scheduling for DHCP
>
>
>
> Hi Neutron Stackers,
>
>
>
> There is interest among vendors in bringing Least Networks scheduling for
> DHCP into OpenStack Neutron.
>
>
>
> Currently there are the following blueprints lying there, all of them
> trying to address this issue:
>
> https://review.openstack.org/111210
>
> https://review.openstack.org/#/c/130912/
>
> https://review.openstack.org/104587
>
>
>
> We are trying to pull together all these BPs as one umbrella BP, on which
> we can gather volunteers from every side, to clear this BP itself as an
> initial step.
>
>
>
> So we would like to collaborate, to plan BP approval for these.
>
>
>
> Please respond if you are interested.
>
>
>
> --
>
> Thanks,
>
>
>
> Vivek
>
>
>
>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Core/Vendor code decomposition

2014-11-14 Thread Armando M.
Hello,

As follow-up action after the Design Summit Session on Core/Vendor split,
please find the proposal outlined here:

https://review.openstack.org/#/c/134680/

I know that Anita will tell me off since I asked for reviews on the ML, but
I felt that it was important to raise awareness, even more than necessary :)

I also want to stress that this proposal would not have been possible
without the help of everyone we talked to over the last few weeks, and who
gave us constructive feedback.

Finally, a special thanks goes to Maru Newby and Kevin Benton who helped
with most parts of the proposal.

Let the review tango begin!

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-17 Thread Armando M.
On 17 November 2014 01:13, Mathieu Rohon  wrote:

> Hi
>
> On Fri, Nov 14, 2014 at 6:26 PM, Armando M.  wrote:
> > Last Friday I recall we had two discussions around this topic. One in the
> > morning, which I think led to Maruti to push [1]. The way I understood
> [1]
> > was that it is an attempt at unifying [2] and [3], by choosing the API
> > approach of one and the architectural approach of the other.
> >
> > [1] https://review.openstack.org/#/c/134179/
> > [2] https://review.openstack.org/#/c/100278/
> > [3] https://review.openstack.org/#/c/93613/
> >
> > Then there was another discussion in the afternoon, but I am not 100% of
> the
> > outcome.
>
> Me neither, that's why I'd like ian, who led this discussion, to sum
> up the outcome from its point of view.
>
> > All this churn makes me believe that we probably just need to stop
> > pretending we can achieve any sort of consensus on the approach and let
> the
> > different alternatives develop independently, assumed they can all
> develop
> > independently, and then let natural evolution take its course :)
>
> I tend to agree, but I think that one of the reasons why we are looking
> for a consensus is that API evolutions proposed through
> neutron-specs are rejected by core devs, because they rely on external
> components (SDN controllers, proprietary hardware...) or they are not a
> high priority for the Neutron core devs.
>

I am not sure I agree with this statement. I am not aware of any proposal
here being dependent on external components as you suggested, but even if
it were, an API can be implemented in multiple ways, just like the (core)
Neutron API can be implemented using a fully open source solution or an
external party like an SDN controller.


> By finding a consensus, we show that several players are interested in
> such an API, and it helps to convince core-dev that this use-case, and
> its API, is missing in neutron.
>

Right, but it seems we are struggling to find this consensus. In this
particular instance, where we are trying to address the use case of L2
Gateway (i.e. allow Neutron logical networks to be extended with physical
ones), it seems that everyone has a different opinion as to what
abstraction we should adopt in order to express and configure the L2
gateway entity, and at the same time I see no convergence in sight.

Now if the specific L2 Gateway case were to be considered part of the core
Neutron API, then such a consensus would be mandatory IMO, but if it isn't,
is there any value in striving for that consensus at all costs? Perhaps
not, and we can let multiple attempts experiment and innovate
independently.

So far, all my data points seem to imply that such an abstraction need not
be part of the core API.


> Now, if there is room for easily propose new API in Neutron, It make
> sense to leave new API appear and evolve, and then " let natural
> evolution take its course ", as you said.
> To me, this is in the scope of the "advanced services" project.
>

Advanced Services may be a misnomer, but as an incubation feature, sure,
why not?


>
> > Ultimately the biggest debate is on what the API model needs to be for
> these
> > abstractions. We can judge on which one is the best API of all, but
> > sometimes this ends up being a religious fight. A good API for me might
> not
> > be a good API for you, even though I strongly believe that a good API is
> one
> > that can:
> >
> > - be hard to use incorrectly
> > - clear to understand
> > - does one thing, and one thing well
> >
> > So far I have been unable to be convinced why we'd need to cram more than
> > one abstraction in one single API, as it does violate the above mentioned
> > principles. Ultimately I like the L2 GW API proposed by 1 and 2 because
> it's
> > in line with those principles. I'd rather start from there and iterate.
> >
> > My 2c,
> > Armando
> >
> > On 14 November 2014 08:47, Salvatore Orlando 
> wrote:
> >>
> >> Thanks guys.
> >>
> >> I think you've answered my initial question. Probably not in the way I
> was
> >> hoping it to be answered, but it's ok.
> >>
> >> So now we have potentially 4 different blueprint describing more or less
> >> overlapping use cases that we need to reconcile into one?
> >> If the above is correct, then I suggest we go back to the use case and
> >> make an effort to abstract a bit from thinking about how those use cases
> >> should be implemented.
> >>
> >> Salvatore
> >>
> >> On 14 November 2014 15:42, Igor Cardoso  wrote:
> >>

Re: [openstack-dev] [Neutron] VMware networking

2014-06-30 Thread Armando M.
Hi Gary,

Thanks for sending this out, comments inline.

On 29 June 2014 00:15, Gary Kotton  wrote:

>  Hi,
>  At the moment there are a number of different BP’s that are proposed to
> enable different VMware network management solutions. The following specs
> are in review:
>
>1. VMware NSX-vSphere plugin: https://review.openstack.org/102720
>2. Neutron mechanism driver for VMWare vCenter DVS network creation:
>https://review.openstack.org/#/c/101124/
>3. VMware dvSwitch/vSphere API support for Neutron ML2:
>https://review.openstack.org/#/c/100810/
>
> In addition to this there is also talk about HP proposing some for
> of VMware network management.
>

I believe this is blueprint [1]. This was proposed a while ago, but now it
needs to go through the new BP review process.

[1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan


>  Each of the above has specific use case and will enable existing vSphere
> users to adopt and make use of Neutron.
>
>  Items #2 and #3 offer a use case where the user is able to leverage and
> manage VMware DVS networks. This support will have the following
> limitations:
>
>- Only VLANs are supported (there is no VXLAN support)
>- No security groups
>- #3 – the spec indicates that it will make use of pyvmomi (
>https://github.com/vmware/pyvmomi). There are a number of disclaimers
>here:
>   - This is currently blocked regarding the integration into the
>   requirements project (https://review.openstack.org/#/c/69964/)
>   - The idea was to have oslo.vmware leverage this in the future (
>   https://github.com/openstack/oslo.vmware)
>
> Item #1 will offer support for all of the existing Neutron API’s and there
> functionality. This solution will require a additional component called NSX
> (https://www.vmware.com/support/pubs/nsx_pubs.html).
>
>
It's great to see this breakdown; it's very useful for identifying the
potential gaps and overlaps amongst the various efforts around ESX and
Neutron. This will also ensure a path towards a coherent code contribution.

 It would be great if we could all align our efforts and have some clear
> development items for the community. In order to do this I’d like to suggest
> that we meet to sync and discuss all efforts. Please let me know if the
> following sounds ok for an initial meeting to discuss how we can move
> forwards:
>  - Tuesday 15:00 UTC
>  - IRC channel #openstack-vmware
>

I am available to join.


>
>  We can discuss the following:
>
>1. Different proposals
>2. Combining efforts
>3. Setting a formal time for meetings and follow ups
>
> Looking forward to working on this stuff with the community and providing
> a gateway to using Neutron and further enabling the adoption of OpenStack.
>

I think code contribution is only one aspect of this story; my other
concern is that from a usability standpoint we would need to provide a
clear framework for users to understand what these solutions can do for
them and which one to choose.

Going forward I think it would be useful if we produced an overarching
blueprint that outlines all the ESX options being proposed for OpenStack
Networking (and the existing ones, like NSX - formerly known as NVP, or
nova-network), their benefits and drawbacks, their technical dependencies,
system requirements, APIs supported, etc., so that a user can make an informed
decision when looking at ESX deployments in OpenStack.


>
>  Thanks
> Gary
>
>
Cheers,
Armando


>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] DVR demo and how-to

2014-06-30 Thread Armando M.
Hi folks,

The DVR team is working really hard to complete this important task for
Juno and Neutron.

In order to help you see this feature in action, a video has been made
available; the link can be found in [2].

There is still some work to do; however, I wanted to remind you that all of
the relevant information is available on the wiki [1, 2] and Gerrit [3].

[1] - https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
[2] - https://wiki.openstack.org/wiki/Neutron/DVR/HowTo
[3] - https://review.openstack.org/#/q/topic:bp/neutron-ovs-dvr,n,z

More to follow!

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-07-14 Thread Armando M.
Sounds good to me.


On 14 July 2014 07:13, Gary Kotton  wrote:

> Hi,
> I am sorry but I had to attend a meeting now. Can we please postpone this
> to tomorrow?
> Thanks
> Gary
>
> On 7/8/14, 11:19 AM, "Gary Kotton"  wrote:
>
> >Hi,
> >
> >Just an update and a progress report:
> >
> >1. Armando has created an umbrella BP -
> >
> >
> https://review.openstack.org/#/q/status:open+project:openstack/neutron-spe
> >c
> >
> >s+branch:master+topic:bp/esx-neutron,n,z
> >
> >2. Whoever is proposing the BP’s can you please fill in the table -
> >
> >
> https://urldefense.proofpoint.com/v1/url?u=https://docs.google.com/documen
> >t/d/1vkfJLZjIetPmGQ6GMJydDh8SSWz60iUhuuKhYMJ&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D
> >%3D%0A&r=eH0pxTUZo8NPZyF6hgoMQu%2BfDtysg45MkPhCZFxPEq8%3D%0A&m=SvPJghzudWc
> >d764hV5HdpNELoKWhcqrGB2hyww4WB90%3D%0A&s=74fad114ce48f985c58e1b4e1bdc7efa2
> >ed2376034e7ebd8cb82f0829915cf01
> >
> >qoz8/edit?usp=sharing
> >
> >Lets meet again next week Monday at the same time and same place and plan
> >
> >future steps. How does that sound?
> >
> >Thanks
> >
> >Gary
> >
> >
> >
> >On 7/2/14, 2:27 PM, "Gary Kotton"  wrote:
> >
> >
> >
> >>Hi,
> >
> >>Sadly last night we did not have enough people to make any
> >>progress.
> >
> >>Lets try again next week Monday at 14:00 UTC. The meeting will take place
> >
> >>on #openstack-vmware channel
> >
> >>Alut a continua
> >
> >>Gary
> >
> >>
> >
> >>On 6/30/14, 6:38 PM, "Kyle Mestery"  wrote:
> >
> >>
> >
> >>>On Mon, Jun 30, 2014 at 10:18 AM, Armando M.  wrote:
> >
> >>>> Hi Gary,
> >
> >>>>
> >
> >>>> Thanks for sending this out, comments inline.
> >
> >>>>
> >
> >>>Indeed, thanks Gary!
> >
> >>>
> >
> >>>> On 29 June 2014 00:15, Gary Kotton  wrote:
> >
> >>>>>
> >
> >>>>> Hi,
> >
> >>>>> At the moment there are a number of different BP¹s that are proposed
> >
> >>>>>to
> >
> >>>>> enable different VMware network management solutions. The following
> >
> >>>>>specs
> >
> >>>>> are in review:
> >
> >>>>>
> >
> >>>>> VMware NSX-vSphere plugin: https://review.openstack.org/102720
> >
> >>>>> Neutron mechanism driver for VMWare vCenter DVS network
> >
> >>>>> creation:https://review.openstack.org/#/c/101124/
> >
> >>>>> VMware dvSwitch/vSphere API support for Neutron ML2:
> >
> >>>>> https://review.openstack.org/#/c/100810/
> >
> >>>>>
> >
> >>>I've commented in these reviews about combining efforts here, I'm glad
> >
> >>>you're taking the lead to make this happen Gary. This is much
> >
> >>>appreciated!
> >
> >>>
> >
> >>>>> In addition to this there is also talk about HP proposing some for of
> >
> >>>>> VMware network management.
> >
> >>>>
> >
> >>>>
> >
> >>>> I believe this is blueprint [1]. This was proposed a while ago, but
> >>>>now
> >
> >>>>it
> >
> >>>> needs to go through the new BP review process.
> >
> >>>>
> >
> >>>> [1] -
> >
> >>>>
> https://urldefense.proofpoint.com/v1/url?u=https://blueprints.launchpad
> >>>>.
> >
> >>>>n
> >
> >>>>et/neutron/%2Bspec/ovsvapp-esxi-vxlan&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%
> >>>>0
> >
> >>>>A
> >
> >>>>&r=eH0pxTUZo8NPZyF6hgoMQu%2BfDtysg45MkPhCZFxPEq8%3D%0A&m=MX5q1Rh4UyhnoZ
> >>>>u
> >
> >>>>1
> >
> >>>>a8dOes8mbE9NM9gvjG2PnJXhUU0%3D%0A&s=622a539e40b3b950c25f0b6cabf05bc81bb
> >>>>6
> >
> >>>>1
> >
> >>>>159077c00f12d7882680e84a18b
> >
> >>>>
> >
> >>>>>
> >
> >>>>> Each of the above has specific use case and will enable existing
> >
> >>>>>vSphere
> >
> >

Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Armando M.
I think the specs under the umbrella one can be approved/treated
individually.

The umbrella one is an informational blueprint; there is not going to be
code associated with it. However, before approving it (and the individual
ones) we'd need all the parties interested in vSphere support for Neutron
to reach an agreement as to what the code will look like, so that the
individual contributions being proposed do not clash with each other or
create needless duplication.




On 21 July 2014 06:11, Kyle Mestery  wrote:

> On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton  wrote:
> > Hi,
> > I would like to propose the following for spec freeze exception:
> >
> > https://review.openstack.org/#/c/105369
> >
> > This is an umbrella spec for a number of VMware DVS support specs. Each
> has
> > its own unique use case and will enable a lot of existing VMware DVS
> users
> > to start to use OpenStack.
> >
> > For https://review.openstack.org/#/c/102720/ we have the following
> which we
> > can post when the internal CI for the NSX-v is ready (we are currently
> > working on this):
> >  - core plugin functionality
> >  - layer 3 support
> >  - security group support
> >
> Do we need to approve all the "under the umbrella" specs as well?
>
> > Thanks
> > Gary
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Armando M.
That would be my thinking as well, but if we managed to make impressive
progress from now until the Feature Freeze proposal deadline, I'd be
willing to reevaluate the situation.

A.


On 21 July 2014 12:13, Kyle Mestery  wrote:

> On Mon, Jul 21, 2014 at 2:03 PM, Armando M.  wrote:
> > I think the specs under the umbrella one can be approved/treated
> > individually.
> >
> > The umbrella one is an informational blueprint, there is not going to be
> > code associated with it, however before approving it (and the individual
> > ones) we'd need all the parties interested in vsphere support for
> Neutron to
> > reach an agreement as to what the code will look like so that the
> individual
> > contributions being proposed are not going to clash with each other or
> > create needless duplication.
> >
> That's what I was thinking as well. So, given where we're at in Juno,
> I'm leaning towards having all of this consensus building happen now
> and we can start the Kilo cycle with these BPs in agreement from all
> contributors.
>
> Does that sound ok?
>
> Thanks,
> Kyle
>
> >
> >
> >
> > On 21 July 2014 06:11, Kyle Mestery  wrote:
> >>
> >> On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton 
> wrote:
> >> > Hi,
> >> > I would like to propose the following for spec freeze exception:
> >> >
> >> > https://review.openstack.org/#/c/105369
> >> >
> >> > This is an umbrella spec for a number of VMware DVS support specs.
> Each
> >> > has
> >> > its own unique use case and will enable a lot of existing VMware DVS
> >> > users
> >> > to start to use OpenStack.
> >> >
> >> > For https://review.openstack.org/#/c/102720/ we have the following
> which
> >> > we
> >> > can post when the internal CI for the NSX-v is ready (we are currently
> >> > working on this):
> >> >  - core plugin functionality
> >> >  - layer 3 support
> >> >  - security group support
> >> >
> >> Do we need to approve all the "under the umbrella" specs as well?
> >>
> >> > Thanks
> >> > Gary
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-31 Thread Armando M.
It is not my intention to debate, point fingers, or find culprits; these
issues can be addressed in some other context.

I am gonna say three things:

1) If a core-reviewer puts a -2, there must be a good reason for it. If
other reviewers blindly move on as some people seem to imply here, then
those reviewers should probably not review the code at all! My policy is to
review all the code I am interested in/I can, regardless of the score. My
-1 may be someone's +1 (or vice versa), so 'trusting' someone else's vote
is the wrong way to go about this.

2) If we all feel that this feature is important (which I am not sure it
was, given it was marked as 'low' in Oslo; not sure how it was tracked in
Neutron), there is the weekly IRC Neutron meeting to raise awareness, since
all cores participate; to the best of my knowledge we never spoke (or
barely spoke) of the rootwrap work.

3) If people do want this work in Juno (Carl being one of them), we can
figure out how to make one final push, and assess potential regression. We
'rushed' other features late in the cycle in the past (like nova/neutron event
notifications), and if we keep this disabled by default in Juno, I don't
think it's really that risky. I can work with Carl to give the patches some
more love.

Armando



On 31 July 2014 15:40, Rudra Rugge  wrote:

> Hi Kyle,
>
> I also agree with Mandeep's suggestion of putting a time frame on the
> lingering "-2" if the addressed concerns have been taken care of. In my
> experience also a sticky -2 detracts other reviewers from reviewing an
> updated patch.
>
> Either a time-frame or a possible override by PTL (move to -1) would help
> make progress on the review.
>
> Regards,
> Rudra
>
>
> On Thu, Jul 31, 2014 at 2:29 PM, Mandeep Dhami 
> wrote:
>
>> Hi Kyle:
>>
>> As -2 is sticky, and as there exists a possibility that the original core
>> might not get time to get back to re-reviewing his, do you think that there
>> should be clearer guidelines on it's usage (to avoid what you identified as
>> "dropping of the balls")?
>>
>> Salvatore had a good guidance in a related thread [0], do you agree with
>> something like that?
>>
>>
>> I try to avoid -2s as much as possible. I put a -2 only when I reckon your
>> patch should never be merged because it'll make the software unstable or
>> tries to solve a problem that does not exist. -2s stick across patches and
>> tend to put off other reviewers.
>>
>> [0]
>> http://lists.openstack.org/pipermail/openstack-dev/2014-July/041339.html
>>
>>
>> Or do you think that 3-5 days after an update that addresses the issues
>> identified in the original -2, we should automatically remove that -2? If
>> this does not happen often, this process does not have to be automated,
>> just an "exception" that the PTL can exercise to address issues where the
>> original reason for -2 has been addressed and nothing new has been
>> identified?
>>
>>
>>
>> On Thu, Jul 31, 2014 at 11:25 AM, Kyle Mestery 
>> wrote:
>>
>>> On Thu, Jul 31, 2014 at 7:11 AM, Yuriy Taraday 
>>> wrote:
>>> > On Wed, Jul 30, 2014 at 11:52 AM, Kyle Mestery 
>>> wrote:
>>> >> and even less
>>> >> possibly rootwrap [3] if the security implications can be worked out.
>>> >
>>> > Can you please provide some input on those security implications that
>>> are
>>> > not worked out yet?
>>> > I'm really surprised to see such comments in some ML thread not
>>> directly
>>> > related to the BP. Why is my spec blocked? Neither spec [1] nor code
>>> (which
>>> > is available for a really long time now [2] [3]) can get enough
>>> reviewers'
>>> > attention because of those groundless -2's. Should I abandon these
>>> change
>>> > requests and file new ones to get some eyes on my code and proposals?
>>> It's
>>> > just getting ridiculous. Let's take a look at timeline, shall we?
>>> >
>>> I share your concerns here as well, and I'm sorry you've had a bad
>>> experience working with the community here.
>>>
>>> > Mar, 25 - first version of the first part of Neutron code is published
>>> at
>>> > [2]
>>> > Mar, 28 - first reviewers come and it gets -1'd by Mark because of
>>> lack of
>>> > BP (thankful it wasn't -2 yet, so reviews continued)
>>> > Apr, 1 - Both Oslo [5] and Neturon [6] BPs are created;
>>> > Apr, 2 - first version of the second part of Neutron code is published
>>> at
>>> > [3];
>>> > May, 16 - first version of Neutron spec is published at [1];
>>> > May, 19 - Neutron spec gets frozen by Mark's -2 (because Oslo BP is not
>>> > approved yet);
>>> > May, 21 - first part of Neutron code [2] is found generally OK by
>>> reviewers;
>>> > May, 21 - first version of Oslo spec is published at [4];
>>> > May, 29 - a version of the second part of Neutron code [3] is
>>> published that
>>> > later raises only minor comments by reviewers;
>>> > Jun, 5 - both parts of Neutron code [2] [3] get frozen by -2 from Mark
>>> > because BP isn't approved yet;
>>> > Jun, 23 - Oslo spec [4] is mostly ironed out;
>>> > Jul, 8 - Oslo spec [4] is merged, Neutron 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread Armando M.
Hi,

When I think about Group-Based Policy I cannot help but think about the
wide variety of sentiments (for lack of a better word) that this subject
has raised over the past few months on the mailing list and/or other
venues.

I speak for myself when I say that when I look at the end-to-end
Group-Based Policy functionality I am not entirely sold on the following
points:

- The abstraction being proposed, its relationship with the Neutron API and
ODL;
- The way the reference implementation has been introduced into the
OpenStack world, and Neutron in particular;
- What an evolution of Group-Based Policy means going forward if we use the
proposed approach as a foundation for a more application-friendly and
intent-driven API abstraction;
- The way we used development tools for bringing Neutron developers
(reviewers and committers), application developers, operators, and users
together around these new concepts.

Can I speak for everybody when I say that we do not have a consensus across
the board on all/some/other points touched on in this thread or other
threads? I think I can: I have witnessed that there is *NOT* such a
consensus. If I am asked where I stand, my position is that I wouldn't mind
seeing Group-Based Policy as we know it kick the tires; would I love to
see it do that in a way that's not disruptive to the Neutron project? YES,
I would love to.

So, where do we go from here? Do we need a consensus on such a delicate
area? I think we do.

I think Mark's intent, or that of anyone who has the interest of the Neutron
community as a whole at heart, is to make sure that we find a compromise
which everyone is comfortable with.

Do we vote about what we do next? Do we leave just cores to vote? I am not
sure. But one thing is certain, we cannot keep procrastinating as the Juno
window is about to expire.

I am sure that there are people itching to get their hands on Group-Based
Policy, however the vehicle whereby this gets released should be irrelevant
to them; at the same time I appreciate that some people perceive Stackforge
projects as not as established and mature as other OpenStack projects; that
said, wouldn't it be fair to say that Group-Based Policy is exactly that? If
this means that other immature abstractions would need to follow suit, I
would be all in for this more decentralized approach. Can we do that now,
or do we postpone this discussion for the Kilo Summit? I don't know.

I realize that I have asked more questions than I have given answers, but I
hope we can all engage in a constructive discussion.

Cheers,
Armando

PS: Salvatore I expressly stayed away from the GBP acronym you love so
much, so please read the thread and comment on it :)

On 4 August 2014 15:54, Ivar Lazzaro  wrote:

> +1 Hemanth.
>
>
> On Tue, Aug 5, 2014 at 12:24 AM, Hemanth Ravi 
> wrote:
>
>> Hi,
>>
>> I believe that the API has been reviewed well both for its usecases and
>> correctness. And the blueprint has been approved after sufficient exposure
>> of the API in the community. The best way to enable users to adopt GBP is
>> to introduce this in Juno rather than as a project in StackForge. Just as
>> in other APIs any evolutionary changes can be incorporated, going forward.
>>
>> OS development processes are being followed in the implementation to make
>> sure that there is no negative impact on Neutron stability with the
>> inclusion of GBP.
>>
>> Thanks,
>> -hemanth
>>
>>
>> On Mon, Aug 4, 2014 at 1:27 PM, Mark McClain 
>> wrote:
>>
>>>  All-
>>>
>>> tl;dr
>>>
>>> * Group Based Policy API is the kind of experimentation we be should
>>> attempting.
>>> * Experiments should be able to fail fast.
>>> * The master branch does not fail fast.
>>> * StackForge is the proper home to conduct this experiment.
>>>
>>>
>>> Why this email?
>>> ---
>>> Our community has been discussing and working on Group Based Policy
>>> (GBP) for many months.  I think the discussion has reached a point where we
>>> need to openly discuss a few issues before moving forward.  I recognize
>>> that this discussion could create frustration for those who have invested
>>> significant time and energy, but the reality is we need to ensure we are
>>> making decisions that benefit all members of our community (users,
>>> operators, developers and vendors).
>>>
>>> Experimentation
>>> 
>>> I like that as a community we are exploring alternate APIs.  The process
>>> of exploring via real user experimentation can produce valuable results.  A
>>> good experiment should be designed to fail fast to enable further trials
>>> via rapid iteration.
>>>
>>> Merging large changes into the master branch is the exact opposite of
>>> failing fast.
>>>
>>> The master branch deliberately favors small iterative changes over time.
>>>  Releasing a new version of the proposed API every six months limits our
>>> ability to learn and make adjustments.
>>>
>>> In the past, we’ve released LBaaS, FWaaS, and VPNaa

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.
This thread is moving so fast I can't keep up!

What troubles me is that I am unable to grasp how we move forward,
which was the point of this thread to start with. It seems we have 2
options:

- We merge GBP as is, into the Neutron tree, with some minor revisions
(e.g. naming?);
- We make GBP a StackForge project that integrates with Neutron in some
shape or form;

Another option might be something in between, where GBP is in tree but in
some sort of experimental staging area (even though I am not sure how
well-baked this idea is).

Now, as a community we all need to make a decision; arguing about the fact
that the blueprint was approved is pointless. As a matter of fact, I think
that blueprint should be approved, if and only if the code has landed
completely, but I digress!

Let's come up with the pros and cons of each approach together and make an
informed decision.

Just reading free form text, how are we expected to do that? At least I
can't!

My 2c.
Armando


On 6 August 2014 15:03, Aaron Rosen  wrote:

>
>
>
> On Wed, Aug 6, 2014 at 12:46 PM, Kevin Benton  wrote:
>
>> >I believe the referential security group rules solve this problem
>> (unless I'm not understanding):
>>
>> I think the disconnect is that you are comparing the way to current
>> mapping driver implements things for the reference implementation with the
>> existing APIs. Under this light, it's not going to look like there is a
>> point to this code being in Neutron since, as you said, the abstraction
>> could happen at a client. However, this changes once new mapping drivers
>> can be added that implement things differently.
>>
>> Let's take the security groups example. Using the security groups API
>> directly is imperative ("put a firewall rule on this port that blocks this
>> IP") compared to a higher level declarative abstraction ("make sure these
>> two endpoints cannot communicate"). With the former, the ports must support
>> security groups and there is nowhere except for the firewall rules on that
>> port to implement it without violating the user's expectation. With the
>> latter, a mapping driver could determine that communication between these
>> two hosts can be prevented by using an ACL on a router or a switch, which
>> doesn't violate the user's intent and buys a performance improvement and
>> works with ports that don't support security groups.
>>
>> Group based policy is trying to move the requests into the declarative
>> abstraction so optimizations like the one above can be made.
>>
>
> Hi Kevin,
>
> Interesting points. Though, let me ask this. Why do we need to move to a
> declarative API abstraction in neutron in order to perform this
> optimization on the backend? For example, In the current neutron model say
> we want to create a port with a security group attached to it called web
> that allows TCP:80 in and members who are in a security group called
> database. From this mapping I fail to see how it's really any different
> from the declarative model? The ports in neutron are logical abstractions
> and the backend system could be implemented in order to determine that the
> communication between these two hosts could be prevented by using an ACL on
> a router or switch as well.
>
> Best,
>
> Aaron
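
A minimal sketch to make the imperative/declarative contrast above concrete.
It uses python-neutronclient for the imperative half; the credentials, the
network id, the group names and the choose_enforcement_point() helper at the
end are all made up for illustration -- the helper is not GBP code, it only
hints at the kind of freedom a declarative mapping driver would have.

    from neutronclient.v2_0 import client

    # Illustrative credentials only.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Imperative style: the user spells out *how* traffic is filtered,
    # i.e. firewall rules attached to the port itself.
    web = neutron.create_security_group(
        {'security_group': {'name': 'web'}})['security_group']
    db = neutron.create_security_group(
        {'security_group': {'name': 'database'}})['security_group']

    # "Allow TCP:80 into 'web', but only from members of 'database'"
    # (the referential security group rule mentioned above).
    neutron.create_security_group_rule(
        {'security_group_rule': {'security_group_id': web['id'],
                                 'direction': 'ingress',
                                 'protocol': 'tcp',
                                 'port_range_min': 80,
                                 'port_range_max': 80,
                                 'remote_group_id': db['id']}})

    neutron.create_port(
        {'port': {'network_id': 'NET_ID',  # made-up network id
                  'security_groups': [web['id']]}})

    # Declarative style (hypothetical): the user only states the intent
    # "web and database endpoints may talk on TCP:80"; a mapping driver
    # is then free to decide *where* to enforce it.
    def choose_enforcement_point(port_supports_security_groups):
        """Purely illustrative: a driver could pick a router/switch ACL
        instead of per-port firewall rules without violating the intent."""
        if port_supports_security_groups:
            return 'per-port firewall rules'
        return 'ACL on a router or switch'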
>
>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.
On 6 August 2014 15:47, Kevin Benton  wrote:

> I think we should merge it and just prefix the API for now with
> '/your_application_will_break_after_juno_if_you_use_this/'
>

And you make your call based on what pros and cons exactly, if I may ask?

Let me start:

Option 1:
  - pros
    - immediate delivery vehicle for consumption by operators
  - cons
    - the code is a burden from a number of standpoints (review, test, etc.)

Option 2:
  - pros
    - enables a small set of Illuminati to iterate faster
  - cons
    - integration burden with other OpenStack projects (keystone, nova,
      neutron, etc.)

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.
>
> This is probably not intentional from your part ,but your choice of words
> make it seem that you are deriding the efforts of the team behind this
> effort. While i may disagree technically here and there with their current
> design, it seems to me that the effort in question is rather broad based in
> terms of support (from multiple different organizations) and that the team
> has put a non trivial effort in making the effort public. I don't think we
> can characterize the team either as a "secret group" or a "small set".
>

You misread me completely, please refrain from making these comments: I
deride no-one.

I chose the word in reference to the Enlightenment movement, with emphasis
on breaking with the traditional way of thinking (declarative vs
imperative), and I thought the analogy would stick, but apparently not.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-06 Thread Armando M.
On 6 August 2014 17:34, Prasad Vellanki 
wrote:

> It seems like Option  1 would be preferable. User can use this right away.
>
>
People choosing Option 1 may think that the shortest route is the best;
that said, the drawback I identified is not to be dismissed either (and I am
sure there are many more pros/cons): an immature product is of good use to
no-one, and we still have the nova parity issue that haunts us.

I think this could be another reason why people associated GBP and
nova-network parity in this thread: the fact that new abstractions are
introduced without solidifying the foundations of the project is a risk to
GBP as well as Neutron itself.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-07 Thread Armando M.
Hi Salvatore,

I did notice the issue and I flagged this bug report:

https://bugs.launchpad.net/nova/+bug/1352141

I'll follow up.

Cheers,
Armando


On 7 August 2014 01:34, Salvatore Orlando  wrote:

> I had to put the patch back on WIP because yesterday a bug causing a 100%
> failure rate slipped in.
> It should be an easy fix, and I'm already working on it.
> Situations like this, exemplified by [1] are a bit frustrating for all the
> people working on improving neutron quality.
> Now, if you allow me a little rant, as Neutron is receiving a lot of
> attention for all the ongoing discussion regarding this group policy stuff,
> would it be possible for us to receive a bit of attention to ensure both
> the full job and the grenade one are switched to voting before the juno-3
> review crunch.
>
> We've already had the attention of the QA team, it would probably good if
> we could get the attention of the infra core team to ensure:
> 1) the jobs are also deemed by them stable enough to be switched to voting
> 2) the relevant patches for openstack-infra/config are reviewed
>
> Regards,
> Salvatore
>
> [1]
> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
>
>
> On 23 July 2014 14:59, Matthew Treinish  wrote:
>
>> On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
>> > Here I am again bothering you with the state of the full job for
>> Neutron.
>> >
>> > The patch for fixing an issue in nova's server external events extension
>> > merged yesterday [1]
>> > We do not have yet enough data points to make a reliable assessment,
>> but of
>> > out 37 runs since the patch merged, we had "only" 5 failures, which puts
>> > the failure rate at about 13%
>> >
>> > This is ugly compared with the current failure rate of the smoketest
>> (3%).
>> > However, I think it is good enough to start making the full job voting
>> at
>> > least for neutron patches.
>> > Once we'll be able to bring down failure rate to anything around 5%, we
>> can
>> > then enable the job everywhere.
>>
>> I think that sounds like a good plan. I'm also curious how the failure
>> rates
>> compare to the other non-neutron jobs, that might be a useful comparison
>> too
>> for deciding when to flip the switch everywhere.
>>
>> >
>> > As much as I hate asymmetric gating, I think this is a good compromise
>> for
>> > avoiding developers working on other projects are badly affected by the
>> > higher failure rate in the neutron full job.
>>
>> So we discussed this during the project meeting a couple of weeks ago [3]
>> and
>> there was a general agreement that doing it asymmetrically at first would
>> be
>> better. Everyone should be wary of the potential harms with doing it
>> asymmetrically and I think priority will be given to fixing issues that
>> block
>> the neutron gate should they arise.
>>
>> > I will therefore resume work on [2] and remove the WIP status as soon
>> as I
>> > can confirm a failure rate below 15% with more data points.
>> >
>>
>> Thanks for keeping on top of this Salvatore. It'll be good to finally be
>> at
>> least partially gating with a parallel job.
>>
>> -Matt Treinish
>>
>> >
>> > [1] https://review.openstack.org/#/c/103865/
>> > [2] https://review.openstack.org/#/c/88289/
>> [3]
>> http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28
>>
>> >
>> >
>> > On 10 July 2014 11:49, Salvatore Orlando  wrote:
>> >
>> > >
>> > >
>> > >
>> > > On 10 July 2014 11:27, Ihar Hrachyshka  wrote:
>> > >
>> > >> -BEGIN PGP SIGNED MESSAGE-
>> > >> Hash: SHA512
>> > >>
>> > >> On 10/07/14 11:07, Salvatore Orlando wrote:
>> > >> > The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
>> > >> > it seems there has been an improvement on the failure rate, which
>> > >> > seem to have dropped to 25% from over 40%. Still, since the patch
>> > >> > merged there have been 11 failures already in the full job out of
>> > >> > 42 jobs executed in total. Of these 11 failures: - 3 were due to
>> > >> > problems in the patches being tested - 1 had the same root cause as
>> > >> > bug 1329564. Indeed the related job started before the patch merged
>> > >> > but finished after. So this failure "doesn't count". - 1 was for an
>> > >> > issue introduced about a week ago which actually causing a lot of
>> > >> > failures in the full job [3]. Fix should be easy for it; however
>> > >> > given the nature of the test we might even skip it while it's
>> > >> > fixed. - 3 were for bug 1333654 [4]; for this bug discussion is
>> > >> > going on on gerrit regarding the most suitable approach. - 3 were
>> > >> > fo

[openstack-dev] [Neutron] Gerrit permissions and Merge rights

2015-10-20 Thread Armando M.
Hi folks,

During revision of the Neutron teams [1], we made clear that the
neutron-specs repo is to be targeted by specs for all the Neutron projects
(core + *-aas).

For this reason I made sure that the neutron-specs-core team +2 right was
extended to all the core teams.

Be mindful and use your +2 rights with care: if you are core on a *-aas
project, you should exercise that vote only for specs that pertain to the
project you're core of.

If I could use this email as a reminder also of the core hierarchy and
lieutenant system we switched to in Liberty ([3]): if you have been made
core by a lieutenant of a sub-system, please use your +2/+A only within
your area of comfort and reach out for help if in doubt.

Reviews are always welcome though!

Cheers,
Armando

[1] https://review.openstack.org/#/c/237180/
[2] https://review.openstack.org/#/admin/groups/314,members
[3]
http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#core-review-hierarchy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] HenryG addition to the Neutron Drivers team

2015-10-20 Thread Armando M.
Hi folks,

Henry has been instrumental in many areas of the project, and his crazy
working hours make even Kevin and me bow in awe.

Jokes aside, I would like to announce HenryG as a new member of the Neutron
Drivers team.

Having a propensity for attendance and a desire to review RFEs puts you on
the right foot to join the group, whose members are rotated regularly so
that everyone is given the opportunity to grow and no-one burns out.

The team [1] meets regularly on Tuesdays [2], and anyone is welcome to
attend.

Please join me in welcoming Henry to the team.

Cheers,
Armando

[1]
http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#drivers-team
[2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Do not merge until further notice

2015-10-20 Thread Armando M.
On 20 October 2015 at 19:46, Takashi Yamamoto  wrote:

> i missed the "further notice"?
>

No, you didn't. RC3 released, Liberty released, the world moved on and I
didn't think of sending an email. Sorry.


>
> On Wed, Oct 14, 2015 at 4:07 AM, Armando M.  wrote:
> > Hi folks,
> >
> > We are in the last hours of Liberty, let's pause for a second and
> consider
> > merging patches only if absolutely necessary. The gate is getting clogged
> > and we need to give priority to potential RC3 fixes or gate stability
> fixes.
> >
> > Thanks,
> > Armando
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gerrit permissions and Merge rights

2015-10-21 Thread Armando M.
On 21 October 2015 at 04:12, Gal Sagie  wrote:

> Do we also want to consider Project Kuryr part of this?
>

No, why would we?


> We already started sending Kuryr spec to the Neutron repository and I
> think it would make sense to manage it
> as part of Neutron spec process.
>

No, unless what you are asking for are changes to the core. Do you have a
reference for me to look at?


>
> Any opinions on that?
>
> Gal.
>
> On Tue, Oct 20, 2015 at 11:10 PM, Armando M.  wrote:
>
>> Hi folks,
>>
>> During revision of the Neutron teams [1], we made clear that the
>> neutron-specs repo is to be targeted by specs for all the Neutron projects
>> (core + *-aas).
>>
>> For this reason I made sure that the neutron-specs-core team +2 right was
>> extended to all the core teams.
>>
>> Be mindful, use your +2 rights with care: if you are core on a *-aas
>> project, you should exercise that vote only for specs that pertain the
>> project you're core of.
>>
>> If I could use this email as a reminder also of the core hierarchy and
>> lieutenant system we switched to in Liberty ([3]): if you have been made
>> core by a lieutenant of a sub-system, please use your +2/+A only within
>> your area of comfort and reach out for help if in doubt.
>>
>> Reviews are always welcome though!
>>
>> Cheers,
>> Armando
>>
>> [1] https://review.openstack.org/#/c/237180/
>> [2] https://review.openstack.org/#/admin/groups/314,members
>> [3]
>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#core-review-hierarchy
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] HenryG addition to the Neutron Drivers team

2015-10-21 Thread Armando M.
On 21 October 2015 at 02:01, Ihar Hrachyshka  wrote:

>
> > On 21 Oct 2015, at 05:14, Armando M.  wrote:
> >
> > Hi folks,
> >
> > Henry has been instrumental in many areas of the projects and his crazy
> working hours makes even Kevin and I bow in awe.
> >
> > Jokes aside, I would like to announce HenryG as a new member of the
> Neutron Drivers team.
> >
> > Having a propension to attendance, and desire to review of RFEs puts you
> on the right foot to join the group, whose members are rotated regularly so
> that everyone is given the opportunity to grow, and no-one burns out.
> >
> > The team [1] meets regularly on Tuesdays [2], and anyone is welcome to
> attend.
> >
> > Please, join me in welcome Henry to the team.
>
> Nice addition. :)
>
> Do we have criteria for neutron-drivers team members documented? Or is it
> a mere ‘regularly attending the meetings, be mindful and apply common
> sense’?
>

I knew someone was going to ask that. I'll elaborate further, I already
have something in WIP.

Cheers,
Armando


>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gerrit permissions and Merge rights

2015-10-21 Thread Armando M.
On 21 October 2015 at 09:53, Kyle Mestery  wrote:

> On Wed, Oct 21, 2015 at 11:37 AM, Armando M.  wrote:
>
>>
>>
>> On 21 October 2015 at 04:12, Gal Sagie  wrote:
>>
>>> Do we also want to consider Project Kuryr part of this?
>>>
>>
>> No, why would we?
>>
>>
> The reason to consider it is because Kuryr is a sub-project of Neutron,
> and they are doing their spec submissions following the Neutron guidelines.
> Adding the kuryr-core gerrit group to be on part with the *aas repos makes
> sense here. If other sub-projects (like L2FW, SFC, etc.) start doing spec
> reviews in the neutron-specs repository, then adding them makes sense too.
>

I don't believe this is the road we set ourselves on when we started the
decomp/stadium. We wanted a clear separation of concerns and I don't see
how going down this path is going to help us achieve that.

I don't see the grounds to have such an abrupt change in direction right
now, especially for the level of work that that would imply and the
pressure that would put on the drivers team. Anyone is free to review and
contribute where it matters for them, and location should not prevent them
from doing so.


>
>
>> We already started sending Kuryr spec to the Neutron repository and I
>>> think it would make sense to manage it
>>> as part of Neutron spec process.
>>>
>>
>> No, unless what you are asking are changes to the core. Do you have a
>> reference for me to look at?
>>
>>
> See above, perhaps I answered this for you.
>

>
>>
>>> Any opinions on that?
>>>
>>> Gal.
>>>
>>> On Tue, Oct 20, 2015 at 11:10 PM, Armando M.  wrote:
>>>
>>>> Hi folks,
>>>>
>>>> During revision of the Neutron teams [1], we made clear that the
>>>> neutron-specs repo is to be targeted by specs for all the Neutron projects
>>>> (core + *-aas).
>>>>
>>>> For this reason I made sure that the neutron-specs-core team +2 right
>>>> was extended to all the core teams.
>>>>
>>>> Be mindful, use your +2 rights with care: if you are core on a *-aas
>>>> project, you should exercise that vote only for specs that pertain the
>>>> project you're core of.
>>>>
>>>> If I could use this email as a reminder also of the core hierarchy and
>>>> lieutenant system we switched to in Liberty ([3]): if you have been made
>>>> core by a lieutenant of a sub-system, please use your +2/+A only within
>>>> your area of comfort and reach out for help if in doubt.
>>>>
>>>> Reviews are always welcome though!
>>>>
>>>> Cheers,
>>>> Armando
>>>>
>>>> [1] https://review.openstack.org/#/c/237180/
>>>> [2] https://review.openstack.org/#/admin/groups/314,members
>>>> [3]
>>>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#core-review-hierarchy
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Best Regards ,
>>>
>>> The G.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gerrit permissions and Merge rights

2015-10-21 Thread Armando M.
On 21 October 2015 at 09:52, Gal Sagie  wrote:

>
>
> On Wed, Oct 21, 2015 at 7:37 PM, Armando M.  wrote:
>
>>
>>
>> On 21 October 2015 at 04:12, Gal Sagie  wrote:
>>
>>> Do we also want to consider Project Kuryr part of this?
>>>
>>
>> No, why would we?
>>
>
> [Gal] Because Kuryr is a special project which was created in order to
> expose Neutron and its services to containers networking,
>  its mission (at least as defined right now) is to bridge the gaps
> between containers networking world and Neutron and for doing it
>  it already depends on the feature/spec process of Neutron.
>  That is why it make sense to me that just like the services
> projects, our spec approval process will be handled
>  as part of Neutron.
>

Kuryr is no more special than any other Neutron-affiliated project; let's
be 100% clear about that: there is no double standard here.

If you really think that Kuryr should be an integral part of the Neutron
project, then it should not exist, but rather fold into Neutron entirely. I
don't know the backstory of why this was spun off as a separate project in
the first place, but I think that there are some merits in having it as a
standalone entity.

You are always welcome to reach out for feedback; after all, the same people
may work on/have an interest in multiple projects (fingers stuck in many
pies, if you will :P), but going from there to what you're proposing is too
big a leap, and one I find hard to justify.


>
>
>
>>
>>
>>> We already started sending Kuryr spec to the Neutron repository and I
>>> think it would make sense to manage it
>>> as part of Neutron spec process.
>>>
>>
>> No, unless what you are asking are changes to the core. Do you have a
>> reference for me to look at?
>>
>>
>   [Gal]   I dont understand what you mean "No" here, first this spec is
> sent to Mitaka:
>  https://review.openstack.org/#/c/213490/
>
> And as i mentioned above Kuryr spec process depends on Neutron
> (and the specs that are sent
> to Neutron core)
>

I'll review it and provide feedback. 'Depending on Neutron' means requiring
actual enhancements to the core platform that make sense to be
tracked/discussed in Neutron. Everything else can be tracked independently:
this is the separation of concerns that we should strive for.


>
>
>>> Any opinions on that?
>>>
>>> Gal.
>>>
>>> On Tue, Oct 20, 2015 at 11:10 PM, Armando M.  wrote:
>>>
>>>> Hi folks,
>>>>
>>>> During revision of the Neutron teams [1], we made clear that the
>>>> neutron-specs repo is to be targeted by specs for all the Neutron projects
>>>> (core + *-aas).
>>>>
>>>> For this reason I made sure that the neutron-specs-core team +2 right
>>>> was extended to all the core teams.
>>>>
>>>> Be mindful, use your +2 rights with care: if you are core on a *-aas
>>>> project, you should exercise that vote only for specs that pertain the
>>>> project you're core of.
>>>>
>>>> If I could use this email as a reminder also of the core hierarchy and
>>>> lieutenant system we switched to in Liberty ([3]): if you have been made
>>>> core by a lieutenant of a sub-system, please use your +2/+A only within
>>>> your area of comfort and reach out for help if in doubt.
>>>>
>>>> Reviews are always welcome though!
>>>>
>>>> Cheers,
>>>> Armando
>>>>
>>>> [1] https://review.openstack.org/#/c/237180/
>>>> [2] https://review.openstack.org/#/admin/groups/314,members
>>>> [3]
>>>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#core-review-hierarchy
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Best Regards ,
>>>
>>> The G.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:uns

Re: [openstack-dev] [Neutron] Gerrit permissions and Merge rights

2015-10-21 Thread Armando M.
On 21 October 2015 at 10:29, Kyle Mestery  wrote:

> On Wed, Oct 21, 2015 at 12:08 PM, Armando M.  wrote:
>
>>
>>
>> On 21 October 2015 at 09:53, Kyle Mestery  wrote:
>>
>>> On Wed, Oct 21, 2015 at 11:37 AM, Armando M.  wrote:
>>>
>>>>
>>>>
>>>> On 21 October 2015 at 04:12, Gal Sagie  wrote:
>>>>
>>>>> Do we also want to consider Project Kuryr part of this?
>>>>>
>>>>
>>>> No, why would we?
>>>>
>>>>
>>> The reason to consider it is because Kuryr is a sub-project of Neutron,
>>> and they are doing their spec submissions following the Neutron guidelines.
>>> Adding the kuryr-core gerrit group to be on part with the *aas repos makes
>>> sense here. If other sub-projects (like L2FW, SFC, etc.) start doing spec
>>> reviews in the neutron-specs repository, then adding them makes sense too.
>>>
>>
>> I don't believe this is the road we set ourselves on when we started the
>> decomp/stadium. We wanted a clear separation of concerns and I don't see
>> how going down this path is going to help us achieve that.
>>
>> I don't see the grounds to have such an abrupt change in direction right
>> now, especially for the level of work that that would imply and the
>> pressure that would put on the drivers team. Anyone is free to review and
>> contribute where it matters for them, and location should not prevent them
>> from doing so.
>>
>>
> I was merely implying that since these projects are part of neutron, and
> they have specs, keeping them in one place makes sense. And by doing that,
> we'd need to give them +2 powers for their core reviewers. But, I'm fine
> with leaving things the way they are and having them put their specs in
> their devref. But we should update the devref in Neutron to reflect this,
> e.g. that we don't expect specs in neutron-specs for things outside
> [neutron, neutron-fwaas, neutron-lbaas, neutron-vpnaas].
>
>

IMO, it's pretty clear from here [1], which I revised in the context of
[2]. Not sure if there's anything else that's open to misunderstanding.

[1]
http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#neutron-specs-core-reviewer-team
[2] https://review.openstack.org/#/c/237180/



>
>>>
>>>> We already started sending Kuryr spec to the Neutron repository and I
>>>>> think it would make sense to manage it
>>>>> as part of Neutron spec process.
>>>>>
>>>>
>>>> No, unless what you are asking are changes to the core. Do you have a
>>>> reference for me to look at?
>>>>
>>>>
>>> See above, perhaps I answered this for you.
>>>
>>
>>>
>>>>
>>>>> Any opinions on that?
>>>>>
>>>>> Gal.
>>>>>
>>>>> On Tue, Oct 20, 2015 at 11:10 PM, Armando M. 
>>>>> wrote:
>>>>>
>>>>>> Hi folks,
>>>>>>
>>>>>> During revision of the Neutron teams [1], we made clear that the
>>>>>> neutron-specs repo is to be targeted by specs for all the Neutron 
>>>>>> projects
>>>>>> (core + *-aas).
>>>>>>
>>>>>> For this reason I made sure that the neutron-specs-core team +2 right
>>>>>> was extended to all the core teams.
>>>>>>
>>>>>> Be mindful, use your +2 rights with care: if you are core on a *-aas
>>>>>> project, you should exercise that vote only for specs that pertain the
>>>>>> project you're core of.
>>>>>>
>>>>>> If I could use this email as a reminder also of the core hierarchy
>>>>>> and lieutenant system we switched to in Liberty ([3]): if you have been
>>>>>> made core by a lieutenant of a sub-system, please use your +2/+A only
>>>>>> within your area of comfort and reach out for help if in doubt.
>>>>>>
>>>>>> Reviews are always welcome though!
>>>>>>
>>>>>> Cheers,
>>>>>> Armando
>>>>>>
>>>>>> [1] https://review.openstack.org/#/c/237180/
>>>>>> [2] https://review.openstack.org/#/admin/groups/314,members
>>>>>> [3]
>>>>>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#core-review-hierarchy
>>>>>>
>>&

Re: [openstack-dev] [Neutron][Nova] Trunk port feature (VLAN aware VMs)

2015-10-21 Thread Armando M.
On 21 October 2015 at 15:40, Ildikó Váncsa 
wrote:

> Hi Folks,
>
> During Liberty we started the implementation of the VLAN aware VMs
> blueprint (https://review.openstack.org/#/c/94612/). We had quite a good
> progress, although we could use some extra hands on Neutron side and some
> thoughts on the Nova-Neutron interaction aspect of the feature.
>
> The status of the code can be checked here:
>
> https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/vlan-aware-vms,n,z
>
> https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient+branch:master+topic:bp/vlan-aware-vms,n,z
> https://review.openstack.org/#/c/213120/
> The spec proposed for Nova can be found here:
> https://review.openstack.org/#/c/213644/
>
>
> We also added a note to the corresponding cross-project session to discuss
> further the impacts of this feature on Nova:
>
> https://mitakadesignsummit.sched.org/event/c2292316e85e922a9a649191ad1e0160#.VigTqpeLJ4M
>
> https://etherpad.openstack.org/p/mitaka-neutron-core-cross-project-integration
>
> If you are interested in this feature and would like to help out please
> let me know. If you will be in Tokyo, we can catch up during/after the
> cross-project session or set up a separate discussion to move forward and
> speed up the feature implementation.
>

Hi,

Thanks for the email. We discussed blueprint [1] during the last IRC
meeting [2] and based on our latest blueprint procedures [3], Rossella has
volunteered to help you through the process. She is going to be the main
point of contact for anything related to the feature. We'll watch the
progress of the blueprint over the course of the cycle, and meeting
participation is encouraged to raise/discuss blockers.

HTH
Armando

[1] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]
http://eavesdrop.openstack.org/meetings/networking/2015/networking.2015-10-20-14.00.log.html
[3]
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements


> Thanks and Best Regards,
> Ildikó
> (IRC: ildikov)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sriov] SRIOV-VM could not work well with normal VM

2015-10-22 Thread Armando M.
On 22 October 2015 at 01:21, yujie  wrote:

> I used ixgbe and vlan, passthrough a VF to vm.
> After the VM created, it could not connect to VM on the same compute node
> without use sriov.
>
>
Not sure if this is the same conversation happening on Launchpad, but if
not, this might be relevant:

https://bugs.launchpad.net/neutron/+bug/1506003


> On 2015/10/22 10:58, Alexander Duyck wrote:
>
>> I assume by Intel cards you mean something that is running ixgbe?  If so
>> and you are trying to use SR-IOV with OVS and VLANs running on top of
>> the PF it will fail. The issue is that OVS requires the ability to place
>> the PF in promiscuous mode to support VLAN trunking, and ixgbe driver
>> prevents that when SR-IOV is enabled.
>>
>> The "bridge fdb add" approach mentioned should work as long as ixgbe PF
>> is used on a flat network.
>>
>> - Alex
>>
>> On 10/19/2015 07:33 PM, yujie wrote:
>>
>>> Hi Moshe Levi,
>>>Sorry for replying to this message after so long time. The testing
>>> environment was unavailable before.
>>>I use Intel cards, but could only tested base kilo and vlan. Could
>>> it work?
>>>
>>> On 2015/9/22 13:24, Moshe Levi wrote:
>>>
 Hi Yujie,

 There is a patch https://review.openstack.org/#/c/198736/ which I
 wrote to add the mac of the normal instance to
 the SR-IOV embedded switch so that the packet will go to the PF
 instead of going to the wire.
 This is done by using the bridge tool with the command "bridge fdb add
 <MAC> dev <PF>".

 I was able to test it on Mellanox ConnectX3  card with both vlan and
 flat network and it worked fine.
 I wasn't able to test it on any of the Intel cards, but I was told that it
 only works on flat networks; on vlan networks the Intel card drops the
 tagged packets and they do not go up to the VF.

 What NIC are you using? Can you try using "bridge fdb add <MAC> dev <PF>"
 where <MAC> is the mac of the normal vm and <PF> is the PF, and see if that
 resolves the issue. Also can you check it with flat and vlan networks.
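
For reference, a minimal sketch of scripting that workaround; the MAC and PF
interface name below are made up, and the "bridge fdb add" command itself is
the standard iproute2 tool that the patch above relies on.

    import subprocess

    # Made-up values: MAC of the normal (non-SR-IOV) VM's port and the PF
    # netdev that has SR-IOV VFs enabled.
    vm_mac = 'fa:16:3e:12:34:56'
    pf_dev = 'enp5s0f0'

    # Program the embedded switch FDB so that traffic destined to the normal
    # VM's MAC is forwarded up to the PF (and on to OVS) instead of out to
    # the wire.
    subprocess.check_call(['bridge', 'fdb', 'add', vm_mac, 'dev', pf_dev])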


 -Original Message-
 From: yujie [mailto:judy_yu...@126.com]
 Sent: Tuesday, September 22, 2015 6:28 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron][sriov] SRIOV-VM could not work
 well with normal VM

 Hi all,
 I am using neutron kilo without dvr to create sriov instance VM-A,it
 works well and could connect to its gateway fine.
 But when I let the normal instance VM-B which in the same
 compute-node with VM-A ping its gateway, it failed. I capture the
 packet on the network-node, find the gateway already reply the
 ARP-reply message to VM-B. But compute-node which VM-B lives could
 not send the package to VM-B.
 If delete VM-A and set : echo 0 >
 /sys/class/enp5s0f0/device/sriov_numvfs, the problem solved.

 Is it a same question with the bug: SR-IOV port doesn't reach OVS
 port on same compute node ?
 https://bugs.launchpad.net/neutron/+bug/1492228
 Any suggestions will be grateful.

 Thanks,
 Yujie



 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Summit session on Lighting talks

2015-10-22 Thread Armando M.
Hi folks,

We currently have the submissions below and only 6 slots. There's still some time
left before the end of this week.

Whoever put an entry in the etherpad [1], please consider adding your name,
otherwise we don't know who to reach out to during the session [2]. If all
things stay the same, we'll pick the 6 that have a name assigned to
them.

Cheers,
Armando


   - Results presentation for the Nova Networks/Neutron migration survey.
   (piet)


   - Dragonflow, Liberty release and beyond - gsagie


   - DSCP QoS implementation - njohnston, vhoward


   - Adding BGP EVPN Support to BGP Dynamic Routing, for Dynamic Separation
   of Internet and Tenant External Traffic - mickeys


   - BGP dynamic routing status


   - BGPVPN project status


   - L2GW as inter-cloud connectivity entity (boarder gateway) - gsagie


[1] https://etherpad.openstack.org/p/mitaka-neutron-labs-lighting-talks
[2]
http://mitakadesignsummit.sched.org/event/2233580b895bc50617a06d7795d8e562#.VikNMmSrR-U
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] IRC weekly meeting

2015-10-29 Thread Armando M.
A reminder that we won't have the meeting next week.

Safe journey back from Tokyo to those who have travelled to the Summit.

Cheers,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

