Re: [openstack-dev] floating IP is DOWN

2016-09-22 Thread Salvatore Orlando
That LOG statement was probably added for debugging purposes.
There are several possible causes for a floating IP being down. If you see
any traceback in the neutron server or l3-agent logs, it will probably
reveal the root cause immediately.

On the other hand, lack of any traceback might indicate communication
issues between the server and the l3 agent.

Salvatore

On 22 September 2016 at 16:53, Brian Haley  wrote:

> On 09/22/2016 10:19 AM, Barber, Ofer wrote:
>
>> When I assign a floating IP to a server, I see that the status of the
>> floating
>> IP is "down"
>>
>> Why is that so?
>>
>> *_code:_*
>>
>> LOG.info("\n<== float IP address: %s and status: %s  ==>" %
>> (float_ip['floating_ip_address'],float_ip['status']))
>>
>> *_Output:_*
>>
>> <== float IP address: 10.63.101.225 and status: DOWN  ==>
>>
>
> I couldn't find that code anywhere, what release was this on?
>
> From a Newton-based system created yesterday, this is the message in the
> l3-agent log when I associate a floating IP:
>
> Floating ip 4c1b4571-a003-43f2-96a1-f7073cd1319d added, status ACTIVE
>
> -Brian
>


Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-13 Thread Salvatore Orlando
As for the notifier proposed above, it is correct that neutron needs to be
changed. This should not be a massive amount of work. Today it works with
nova only, pretty much because nova is the only compute service neutron
interacts with.

The question brought up about ping vs operational status is a very good one.
In neutron status=UP for a port only means that L2 wiring (at least for
most plugins) occurred on the port. Networking might not yet be fully ready.
I know some plugins - like ML2 - are adding (or have recently added)
mechanisms to improve this situation.

Pinging a port might seem the most reliable way of knowing whether a port
is up, but this has issues:
- false positives (or negatives, according to which event you are trying to
verify!)
- security groups getting in the way
- the need to be able to reach container interfaces, which might require
"health checking agents" to implement this.

I think that if:
- you are not using DHCP
- you can clearly identify the sets of ports you are waiting on
- you are using the ML2-based reference implementation (or any other impl
which does not do round-trips to the backend on GET operations)

You should be ok with polling. I'm not sure however whether a backoff
mechanism is applicable in this case.

Salvatore




On 13 June 2016 at 21:00, Rick Jones  wrote:

> On 06/10/2016 03:13 PM, Kevin Benton wrote:
>
>> Polling should be fine. A get_port operation is relatively cheap
>> for Neutron.
>>
>
> Just in principle, I would suggest this polling have a back-off built into
> it.  Poll once, see the port is not yet "up" - wait a semi-random short
> length of time,  poll again, see it is not yet "up" wait a longer
> semi-random length of time, lather, rinse, repeat until you've either
> gotten to the limits of your patience or the port has become "up."
>
> Fixed, short poll intervals can run the risk of congestive collapse "at
> scale."
>
> rick jones
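
A minimal sketch of such a jittered backoff poll, assuming
python-neutronclient's show_port call (the function name and parameters
are illustrative):

import random
import time

def wait_for_port_active(neutron, port_id, base=1, cap=30, timeout=300):
    # Poll a port until ACTIVE, with growing semi-random waits.
    deadline = time.time() + timeout
    attempt = 0
    while time.time() < deadline:
        if neutron.show_port(port_id)['port']['status'] == 'ACTIVE':
            return
        # Exponential backoff with full jitter, so many concurrent
        # pollers do not hammer the server in lockstep.
        time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        attempt += 1
    raise RuntimeError('port %s not ACTIVE after %s seconds'
                       % (port_id, timeout))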
>
>
>


Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-08 Thread Salvatore Orlando
Neutron already has the ability to send an event as a REST call to
notify a third party that a port became active [1].
Nova uses this to hold off booting instances until the network has been
wired.
Perhaps kuryr could leverage this without having to tap into the AMQP bus,
as that would be implementation-specific - it would assume a plugin that
communicates with the reference implementation's l2 agent.

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/notifiers/nova.py
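
For reference, a rough sketch of the kind of event that notifier sends to
Nova via novaclient's server_external_events API; the real notifier's
batching and error handling are omitted, and the session and variables
below are placeholders:

from novaclient import client as nova_client

nova = nova_client.Client('2', session=keystone_session)  # assumed session
nova.server_external_events.create([{
    'name': 'network-vif-plugged',
    'server_uuid': instance_uuid,  # placeholder
    'tag': port_id,                # the Neutron port UUID
    'status': 'completed',
}])

A kuryr equivalent would need Neutron to learn a similar receiving
endpoint for containers, which is the change discussed above.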



On 8 June 2016 at 17:23, Mohammad Banikazemi  wrote:

> For the Kuryr project, in order to support blocking until vifs are plugged
> in (that is, adding config options similar to the following options defined
> in Nova: vif_plugging_is_fatal and vif_plugging_timeout), we need to detect
> that the Neutron plugin being used is done with plugging a given vif.
>
> Here are a few options:
>
> 1- The simplest approach seems to be polling for the status of the Neutron
> port to become Active. (This may lead to scalability issues but short of
> having a specific goal for scalability, it is not clear that will be the
> case.)
> 2- Alternatively, we could subscribe to the message queue and wait for
> such a port update event.
> 3- It was also suggested that we could use l2 agent extension to detect
> such an event but that seems to limit us to certain Neutron plugins and
> therefore not acceptable.
>
> I was wondering if there are other and better options.
>
> Best,
>
> Mohammad
>


Re: [openstack-dev] [devstack][neutron] VMWare NSX CI - voting on devstack changes long after plugin decomposition

2016-05-03 Thread Salvatore Orlando
There is a job which has been turned on again by mistake and I'm working on
ensuring it's put to sleep again (for good this time).

If you can avoid disabling the whole account that would be great, as the
same credentials are used by the still-voting nova CI.

Cheers,
Salvatore

On 3 May 2016 at 10:47, Sean M. Collins  wrote:

> When the VMWare plugin was decomposed from the main Neutron tree (
> https://review.openstack.org/#/c/160463/) it appears that the CI system
> was left turned on.
>
>
> http://208.91.1.172/logs/neutron/168438/48/423669-large-ops/logs/q-svc.log.2016-05-03-085740
>
> 2016-05-03 09:21:00.577 21706 ERROR neutron plugin_class =
> self.load_class_for_provider(namespace, plugin_provider)
> 2016-05-03 09:21:00.577 21706 ERROR neutron   File
> "/opt/stack/neutron/neutron/manager.py", line 145, in
> load_class_for_provider
> 2016-05-03 09:21:00.577 21706 ERROR neutron raise
> ImportError(_("Plugin '%s' not found.") % plugin_provider)
> 2016-05-03 09:21:00.577 21706 ERROR neutron ImportError: Plugin
> 'neutron.plugins.vmware.plugin.NsxPlugin' not found.
>
>
> I don't know the criteria for when this specific CI job is run; I appear
> to be the only one triggering it, for a rather long time:
>
> http://paste.openstack.org/show/495994/
>
> So, it's still voting on DevStack changes but I think we probably should
> revoke that.
>
> --
> Sean M. Collins
>


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Salvatore Orlando
On 21 April 2016 at 16:54, Boden Russell  wrote:

> On 4/20/16 3:29 PM, Doug Hellmann wrote:
> > Yes, please, let's try to make that work and contribute upstream if we
> > need minor modifications, before we create something new.
>
> We can leverage the 'retrying' module (already in global requirements).
> It lacks a few things we need, but those can be implemented using its
> existing "hooks" today, or, working with the module owner(s) to push a
> few changes that we need (the latter probably provides the "greatest good").
>

Retrying (even if mostly a one-man effort) already has a history of
contributions from different sources, including a few OpenStack
contributors.
It hasn't had many commits in the past 12 months, but this does not mean
new PRs won't be accepted.
Starting a new library for something like this really feels like NIH.
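
As a data point, this is roughly what retrying already offers out of the
box (the decorator kwargs are real retrying options; the decorated
function is just a stand-in):

from retrying import retry

@retry(wait_exponential_multiplier=1000,  # ~1s initial backoff...
       wait_exponential_max=10000,        # ...capped at 10s per wait
       stop_max_attempt_number=5,
       retry_on_exception=lambda e: isinstance(e, IOError))
def flaky_operation():
    # Stand-in for e.g. a DB call that can fail transiently.
    raise IOError('transient failure')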

As for hooks vs contributions this really depends on what you need to add.
Can you share more details on the "few things we need" that retrying is
lacking?
(and I apologise if you shared them earlier in this thread - I did not read
all of it)


>
> Assuming we'll leverage 'retrying', I was thinking the initial goals
> here are:
> (a) Ensure 'retrying' supports the behaviors we need for our usages in
> neutron + nova (see [1] - [5] on my initial note) today. Implementation
> details TBD.
> (b) Implement a "Backing off RPC client" in oslo, inspired by [1].
>

Do you think oslo_messaging would be a good target? Or do you think it
should go somewhere else?


> (c) Update nova + neutron to use the "common implementation(s)" rather
> than 1-offs.
>
> This sounds fun and I'm happy to take it on. However, I probably won't

> make much progress until after the summit for obvious reasons. I'll plan
> to lead with code, if a RFE/spec/other is needed please let me know.


> Additional comments welcomed.
>
>
> Thanks
>
> [1] https://review.openstack.org/#/c/280595
>


Re: [openstack-dev] [cross-project] [all] Quotas and the need for reservation

2016-04-14 Thread Salvatore Orlando
On 12 April 2016 at 15:48, Andrew Laski  wrote:

>
>
> On Tue, Apr 5, 2016, at 09:57 AM, Ryan McNair wrote:
> > >It is believed that reservations help to reserve a set of resources
> > >beforehand, eventually preventing any other upcoming request
> > >(serial or parallel) from exceeding the quota if, because of the original
> > >request, the project has reached its quota limits.
> > >
> > >Questions :-
> > >1. Does reservation in its current state as used by Nova, Cinder,
> Neutron
> > >help to solve the above problem ?
> >
> > In Cinder the reservations are useful for grouping quota
> > for a single request, and if the request ends up failing
> > the reservation gets rolled back. The reservations also
> > rollback automatically if not committed within a certain
> > time. We also use reservations with Cinder nested quotas
> > to group a usage request that may propagate up to a parent
> > project in order to manage commit/rollback of the request
> > as a single unit.
>

Neutron recently introduced reservations.
Without reservations it was theoretically possible for a tenant to achieve
n times the amount of resources granted by the quota, where n is the number
of workers or distinct server instances.
More information is available in [1] and [2].


> >
> > >
> > >2. Is it consistent, reliable ?  Even with reservation can we run into
> > >in-consistent behaviour ?
>
>
> > Others can probably answer this better, but I have not
> > seen the reservations be a major issue. In general with
> > quotas we're not doing the check and set atomically which
> > can get us in an inconsistent state with quota-update,
> > but that's unrelated to the reservations.
>

I do not have any news of bugs, nor do I know of any issue that might
affect the consistency of the reservation system.
One known weakness has to do with Galera clusters, as the reservation
system uses an update lock, which is pointless in that case.
Neutron handles the resulting write-set certification failure by retrying
the operation, which is quite expensive.
There were already proposals in the nova space to implement a lock-free CAS
algorithm for reservations, but I have since lost track of developments
in the area.
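
For illustration, a hedged sketch of the lock-free compare-and-swap idea,
assuming a quota_usages table with (tenant_id, resource, in_use) columns -
this shows the general technique, not Nova's or Neutron's actual code:

import sqlalchemy as sa

def try_claim(conn, usages, tenant_id, resource, amount, limit):
    where = sa.and_(usages.c.tenant_id == tenant_id,
                    usages.c.resource == resource)
    current = conn.execute(
        sa.select([usages.c.in_use]).where(where)).scalar()
    if current + amount > limit:
        return False  # over quota, nothing claimed
    # Re-check the value we read in the WHERE clause: if a concurrent
    # worker bumped it first, zero rows match and the caller retries.
    result = conn.execute(
        usages.update()
        .where(sa.and_(where, usages.c.in_use == current))
        .values(in_use=current + amount))
    return result.rowcount == 1

Unlike SELECT ... FOR UPDATE this never blocks; contention simply shows up
as a failed claim that the caller retries.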



> >
> > >
> > >3. Do we really need it ?
> > >
> >
> > Seems like we need *some* way of keeping track of usage
> > reserved during a particular request and a way to easily
> > roll that back at a later time. I'm open to alternatives
> > to reservations, just wondering what the big downside of
> > the current reservation system is.
>

As with most things, either one proactively ensures a desired condition is
met, or one reacts when that condition is no longer met.
This means that without reservations - i.e., with optimistic enforcement -
corrective steps must be taken after committing the transaction
that sent the resource over quota. This is completely ok in my opinion. For
instance, if taking corrective steps has a cost of 5 and
creating/committing a reservation has a cost of 2, the reactive approach is
convenient if fewer than 1 request in 3 sends a resource
over quota (note: I've made the numbers up, I just wanted to make the point
that reacting rather than being proactive can be convenient).

However, for Neutron the reactive approach simply won't work because
Neutron leaves a certain degree of freedom to plugins, and several plugins
operate on the backend before committing the DB transaction (I know it's
probably not ok, but if we give them freedom to do so then we cannot
complain I guess). In that case the rollback will be very expensive and it
cannot be a simple DB operation as it has to involve the backend as well.


>
> Jay goes into it a little bit in his response to another quota thread
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090560.html
> and I share his thoughts here.
>
> With a reservation system you're introducing eventual consistency into
> the system rather than being strict because reservations are not tied to
> a concrete thing. You can't do a point in time check of whether the
> reserved resources are going to eventually be used if something happens
> like a service restart and a request is lost. You have to have no
> activity for the duration of the expiration time to let things settle
> before getting a real view of quota usages.
>

That is true. This is a problem in Neutron that I would like to address too.


>
> Instead if you tie quota usage to the resource records then you can
> always get a view of what's actually in use.
>

Yup, but a reservation and current usage are two different things, aren't
they?


>
> One thing that should probably be clarified in all of these discussion
> is what exactly is the quota on. I see two answers: the quota is against
> the actual resource usage, or the quota is against the records tracking
> usage. Since we currently track quotas with a reservation system I think
> it's fair to say that we're not actually tracking against resource like
> disk/RAM/CPU being in use. I would

Re: [openstack-dev] Nova quota statistics counting issue

2016-04-14 Thread Salvatore Orlando
For what it's worth, neutron employs "resource trackers" which conceptually
do something similar to nova's quota usage statistics.
Before starting any transaction that can potentially change usage for a
given resource, the quota enforcement mechanism checks for a "dirty" marker
on the resource tracker.
If that marker is present, usage data for that resource are recalculated
from the resource's DB table. If not, the current usage count is employed
for quota enforcement, and the "dirty" flag is set.

This means that if the process dies in the middle of a transaction, the
next transaction will rebuild the correct usage count from the DB.

Salvatore
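
A minimal, self-contained sketch of this dirty-marker scheme (the real
code lives in neutron's quota machinery; here the "DB table" is just an
in-memory list):

class TrackedResource(object):
    def __init__(self, name, db_rows):
        self.name = name
        self.db_rows = db_rows      # stand-in for the resource's DB table
        self.in_use = len(db_rows)  # cached usage count
        self.dirty = False

    def count(self):
        if self.dirty:
            # A previous transaction may have died mid-flight: rebuild
            # the usage count from the authoritative table.
            self.in_use = len(self.db_rows)
            self.dirty = False
        return self.in_use

    def claim(self, limit):
        if self.count() + 1 > limit:
            raise ValueError('over quota for %s' % self.name)
        # Mark dirty *before* changing data, so a crash between here and
        # the commit forces a recount on the next check.
        self.dirty = True
        self.db_rows.append(object())
        self.in_use += 1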


On 14 April 2016 at 14:08, Timofei Durakov  wrote:

> Hi,
>
> I think it would be ok to store quota details persistently on the compute
> side, as was discussed during the mitaka mid-cycle[1] for migrations[2]. So
> if the compute service fails we could restore state and update quota after
> compute restart.
>
> Timofey
>
> [1] - https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
> [2] - https://review.openstack.org/#/c/291161/5/nova/compute/background.py
>
>
>
>
> On Wed, Apr 13, 2016 at 7:27 PM, Dmitry Stepanenko <
> dstepane...@mirantis.com> wrote:
>
>> Hi Team,
>>
>> I worked on the nova quota statistics issue (
>> https://bugs.launchpad.net/nova/+bug/1284424) happening when nova-*
>> processes are restarted while instances are being removed, and was able to
>> reproduce it. For the repro I used devstack and started nova-api and
>> nova-compute in separate screen windows, killing them with ctrl+c. I found
>> the issue happens if nova-* processes are killed after an instance was
>> deleted but right before the quota commit procedure finishes.
>>
>> We discussed these results with Markus Zoeller and decided that even
>> though killing nova processes is a somewhat exotic event, this should still
>> be fixed because quota counting affects billing and is very important for us.
>>
>> So, we need to introduce some mechanism that will prevent us from
>> reaching inconsistent states in terms of quotas. In other words, this
>> mechanism should ensure that the instance create/remove operation and the
>> quota usage recount operation either both happen or neither does.
>>
>> Any ideas how to do that properly?
>>
>> Kind regards,
>> Dmitry
>>


Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

2016-04-06 Thread Salvatore Orlando
Hey! This sounds like bike-shedding & yak-shaving... totally my thing!

It is true that the Neutron model currently kind of forces a two-level
topology, with the external network being a sort of special case.
Regardless, this does not mean you cannot assign public IPs directly to
your instances - Neutron routers also work without NAT.

Shall we start a discussion on the evils of NAT now?
To me it is one of those things like landline telephones. You don't really
need them, you know how to do without them, but for some reason you keep
using them and perceiving them as a fundamental service.

As for the issue Kevin pointed out, that's a limitation of the current
reference implementation that if overcome will probably simplify the
Neutron control plane as well.

Salvatore

On 2 April 2016 at 00:05, Kevin Benton  wrote:

> The main barrier to this is that we need to stop using the
> 'external_network_bridge = br-ex' option for the L3 agent and define a
> bridge mapping on the L2 agent. Otherwise the external network is treated
> as a special case and the VMs won't actually be able to get wired into the
> external network.
>
> On Thu, Mar 31, 2016 at 12:58 PM, Sean Dague  wrote:
>
>> On 03/31/2016 01:23 PM, Monty Taylor wrote:
>> > Just a friendly reminder to everyone - floating IPs are not synonymous
>> > with Public IPs in OpenStack.
>> >
>> > The most common (and growing, thank you to the beta of the new
>> > Dreamcompute cloud) configuration for Public Clouds is directly assign
>> > public IPs to VMs without requiring a user to create a floating IP.
>> >
>> > I have heard that the require-floating-ip model is very common for
>> > private clouds. While I find that even stranger, as the need to run NAT
>> > inside of another NAT is bizarre, it is what it is.
>> >
>> > Both models are common enough that pretty much anything that wants to
>> > consume OpenStack VMs needs to account for both possibilities.
>> >
>> > It would be really great if we could get the default config in devstack
>> > to be to have a shared direct-attached network that can also have a
>> > router attached to it and provide floating ips, since that scenario
>> > actually allows interacting with both models (and is actually the most
>> > common config across the OpenStack public clouds)
>>
>> If someone has the pattern for what that config looks like,
>> especially if it could work on single interface machines, that would be
>> great.
>>
>> The current defaults in devstack are mostly there for legacy reasons
>> (and because they work everywhere), and because of the activation energy
>> needed to get to a new, robust, works-everywhere setup.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread Salvatore Orlando
I'm not sure if this was mentioned already throughout the thread, however
as I've been working a bit on quotas in the past I might have some
additional information:

- Looking at quotas it is worth distinguishing between management (e.g.:
resource limits per tenant and/or user) and enforcement (e.g.: can the
bakery service give me 4 cookies or have I already eaten too many?)
  While for the reasons listed throughout this thread the latter should
really happen in the same context where the request is going to be served,
quota management might instead be its own service, or at least be done in
a common endpoint for all OpenStack resources.
- As far as quota enforcement is concerned, Dims already shared all the
relevant links. You might already be aware that we had a consensus around a
library, but hit a bit of a blocker on the fact that the library would have
introduced db model changes (at the time I devised a massive hack disguised
as an abstraction around it). Considering alembic advancements (we are all
using alembic, aren't we?) this should no longer be an issue. I really
would love to have a library that does quota enforcement.
- A good point has also been raised about securing a chunk of resources
across projects; that is also related to John's point about business
quotas... I'm not sure it is necessary, but Blazar [1] kind of achieves
this - even if it was conceived for different purposes.

Salvatore

[1] https://wiki.openstack.org/wiki/Blazar


On 16 March 2016 at 18:27, John Dickinson  wrote:

> There are two types of quotas you may want to enforce in an OpenStack
> project: technical and business.
>
> Technical quotas are things that are hard limits of the system based on
> either actual resources available or protecting the system itself. For
> example, you can't provision a 2TB volume if you only have 1TB of capacity
> available. Similarly, you may want to ratelimit a user to a certain number
> of operations per second in order to keep the system usable by every user.
>
> These sort of quotas should absolutely stay in the realm of each
> individual project. And, for example, if Trove needs to provision a Cinder
> volume but that fails, it's Trove's responsibility for handling that
> elegantly.
>
> Business quotas are different. This is stuff like "a user is allowed to
> provision 1TB of Cinder per Nova compute unit that is provisioned" or "a
> user can provision 1Gb of network capacity per 200TB of data stored in
> Swift". Simpler rules that don't have cross-project dependencies are
> possible too (eg "A user can have no more than 3 compute instances" or "a
> user can have no more than 100k objects or 500TB stored in Swift").
> Oftentimes, these business quotas will be tied in to (or dependent on)
> other product-specific tools like billing or CRM systems.
>
> These business quotas should have a common rules engine in an OpenStack
> deployment. I've long thought that this sort of quota enforcement is an
> authZ decision (i.e. Keystone), but perhaps it's in some other project
> (Congress?). The hard part is that if it's in a central place, that service
> has to be enormously scalable. Specifically, it has to be able to handle
> the aggregate request rate load of every service it is enforcing quotas on.
>
> If we end up with an OpenStack project that is doing centralized business
> quotas, you've got the start of building an ERP system (
> https://en.wikipedia.org/wiki/Enterprise_resource_planning). Frankly, I
> don't think we should be doing that. It's outside of our scope of building
> cloud infrastructure software.
>
> However, we should be all about fixing any problems any individual project
> has about handling technical quotas. That work should stay within its
> respective project. There's no need to consolidate or combine
> project-specific resource management because they happen to all be called
> "quotas".
>
> --John
>
>
>
>
> On 15 Mar 2016, at 23:25, Nikhil Komawar wrote:
>
> > Hello everyone,
> >
> > tl;dr;
> > I'm writing to request some feedback on whether the cross project Quotas
> > work should move ahead as a service or a library or going to a far
> > extent I'd ask should this even be in a common repository, would
> > projects prefer to implement everything from scratch in-tree? Should we
> > limit it to a guideline spec?
> >
> > But before I ask anymore, I want to specifically thank Doug Hellmann,
> > Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and  Andrew
> > Laski for the early feedback that has helped provide some good shape to
> > the already discussions.
> >
> > Some more context on the happenings:
> > We've this in progress spec [1] up for providing context and platform
> > for such discussions. I will rephrase it to say that we plan to
> > introduce a new 'entity' in the Openstack realm that may be a library or
> > a service. Both concepts have trade-offs and the WG wanted to get more
> > ideas around such trade-offs from the larger community.
> >
> > Serv

Re: [openstack-dev] [Neutron] RBAC: Fix port query and deletion for network owner

2016-03-19 Thread Salvatore Orlando
Indeed the VMware plugins were not using resource tracking (they know that
my code should not be trusted!)

This however raises another question that we need to answer... it is
likely that some change broke quota enforcement for plugins which do not
use usage tracking.
When I developed reservations & usage tracking we made the assumption that
plugins should not be forced to use usage tracking. If they did not, the
code would fall back to the old logic, which just executed a count query.

If we want to make usage tracking mandatory I'm fine with that, but we
first need to make sure that every plugin enables it for every resource it
handles.

Salvatore

On 17 March 2016 at 12:41, Gary Kotton  wrote:

> Thanks!
>
> Much appreciated. Will check
>
> From: Kevin Benton 
> Reply-To: OpenStack List 
> Date: Thursday, March 17, 2016 at 1:09 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron] RBAC: Fix port query and deletion
> for network owner
>
> After reviewing your logs[1], it seems that quotas are not working
> correctly in your plugin. There are no statements about tenants being
> marked dirty, etc.
>
> I think you are missing the quota registry setup code in your plugin init.
> Here is the ML2 example:
> https://github.com/openstack/neutron/blob/44ef44c0ff97d5b166d48d2ef93feafa9a0f7ea6/neutron/plugins/ml2/plugin.py#L167-L173
> 
>
>
>
> http://208.91.1.172/logs/neutron/293483/1/check-tempest-vmware-nsx-v3/q-svc.log.txt.gz
> 
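
The linked ML2 code is essentially a class decorator mapping each tracked
resource to its DB model, roughly as follows (paraphrased, so treat the
exact names as approximate):

from neutron.db import db_base_plugin_v2
from neutron.db import models_v2
from neutron.quota import resource_registry

@resource_registry.tracked_resources(network=models_v2.Network,
                                     port=models_v2.Port,
                                     subnet=models_v2.Subnet)
class MyPlugin(db_base_plugin_v2.NeutronDbPluginV2):
    pass  # plugin implementation

Without such a registration, resources are never marked dirty and usage
is never recounted.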
>
> On Thu, Mar 17, 2016 at 1:30 AM, Gary Kotton  wrote:
>
>> Hi,
>> The review https://review.openstack.org/#/c/255285/ breaks our CI. Since
>> this has landed we are getting failed tests with the:
>> "Details: {u'message': u"Quota exceeded for resources: ['port'].",
>> u'type': u'OverQuota', u'detail': u’’}"
>> When I revert the patch and run our CI without it the tests pass. Is
>> anyone else hitting the same or a similar issue?
>> I think that for Mitaka we need to revert this patch
>> Thanks
>> Gary
>>


Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-11 Thread Salvatore Orlando
Some thoughts inline.

Salvatore

On 11 March 2016 at 23:15, Carl Baldwin  wrote:

> Hi,
>
> I have started to get into coding [1] for the Neutron routed networks
> specification [2].
>
> This spec proposes a new association between network segments and
> subnets.  This affects how IPAM needs to work because until we know
> where the port is going to land, we cannot allocate an IP address for
> it.  Also, IPAM will need to somehow be aware of segments.  We have
> proposed a host / segment mapping which could be transformed to a host
> / subnet mapping for IPAM purposes.
>
> I wanted to get the opinion of folks like Salvatore, John Belamaric,
> and you (if you interested) on this.  How will this affect the
> interface to pluggable IPAM and how can pluggable implementations can
> accommodate this change.  Obviously, we wouldn't require
> implementations to support it but routed networks wouldn't be very
> useful without it.  So, those implementations would not be compatible
> when routed networks are deployed.
>

I think it is ok to augment the IPAM interface. Like any API, it needs to
evolve.
I don't think we have a story for its versioning; therefore I reckon the
simplest way to achieve this would be to add a new method for
segment-aware IPAM, which only drivers supporting routed networks would be
required to implement.



>
> Another related topic was brought up in the recent Neutron mid-cycle.
> We talked about adding a service type attribute to to subnets.  The
> reason for this change is to allow operators to create special subnets
> on a network to be used only by certain kinds of ports.  For example,
> DVR fip namespace gateway ports burn a public IP for no good reason.
> This new feature would allow operators to create a special subnet in
> the network with private addressing only to be used by these ports.
>
> Another example would give operators the ability to use private
> subnets for router external gateway ports if shared SNAT is not needed
> or doesn't need to use public IPs.
>
> These are two ways in which subnets are taking on extra
> characteristics which distinguish them from other subnets on the same
> network.  That is why I lumped them together in to one thread.
>

I wonder if we could satisfy this requirement with tags - since these
subnets are operator-owned anyway, you should probably not worry about
regular tenants fiddling with them, and therefore the "helper" subnet
needed for the fip namespace could just be tagged for the purpose.


>
> Carl
>


Re: [openstack-dev] [Neutron][tempest] Timestamp service extension breaks CI

2016-03-07 Thread Salvatore Orlando
On 7 March 2016 at 10:54, Gary Kotton  wrote:

> There are a number of issues here:
>
>1. The create returns additional values, for example the binding:vnic_type,
>whilst the get does not
>
> This is probably a consequence of fixing the behaviour mismatch between
create and get.


>
>    2. We have some unit tests that we need to change (I guess), that
>check function parameters. An example for this is the network passed to a
>method. With the extra extensions this is now changed. In addition to this
>the create and the get order of the parameters is different
>
Fixing those unit tests should not be a big deal. We can assert only on
the keys we want to validate and not on the whole call.
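
For example, something along these lines (a hedged sketch; 'network'
stands for whatever dict the call under test returns):

expected = {'name': 'net1', 'admin_state_up': True}
observed = dict((k, v) for k, v in network.items() if k in expected)
self.assertEqual(expected, observed)

New extension fields - timestamps included - then no longer break the
assertion.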



> Thanks
> Gary
>
> From: Kevin Benton 
> Reply-To: OpenStack List 
> Date: Monday, March 7, 2016 at 11:45 AM
>
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron][tempest] Timestamp service
> extension breaks CI
>
> But that's the whole point of doing the read after the create in the
> plugin. As long as you read after all db changes and call the dict extend
> function, it should be the same.
>
> As far as order goes, python doesn't guarantee order on dictionary keys.
> Or did I misinterpret what you meant by order?
> On Mar 7, 2016 01:41, "Gary Kotton"  wrote:
>
>> Another issue that we have with the read at create is that the dictionary
>> returned is not the same as the one returned when there is a get for the
>> specific resource. The dictionary is also not in the same order.
>>
>> This is currently breaking our unit tests… By that is just another side
>> issue
>>
>> From: Kevin Benton 
>> Reply-To: OpenStack List 
>> Date: Monday, March 7, 2016 at 11:23 AM
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Neutron][tempest] Timestamp service
>> extension breaks CI
>>
>> Right, it can't be done in the base right now because core plugins make
>> DB changes after the base plugin has been called. These changes include the
>> initial create processing of many of the extensions, so we can't call the
>> extend_dict functions before the data many of the registered hooks are
>> looking for even exists.
>>
>> So unfortunately right now it is the responsibility of the plugin to
>> extend the result after all of the DB work is done, not just the base
>> plugin stuff. If a plugin doesn't do it, the responses from that plugin's
>> create calls will not be correct. It was only recently when we started
>> adding API tests that check create responses for extensions that this bug
>> became apparent.
>>
>> I agree that the extra read right now sucks and it will be worth fixing
>> in Newton. Calling the dictionary extension processing outside of the
>> plugin and placing it somewhere in the core before returning the API
>> response may be possible, but the difficult part is getting the DB object
>> to pass to the hooks without an additional read since plugins only return
>> dicts.
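
A hedged sketch of that pattern as plugins implement it today (names
follow common Neutron plugin conventions, but the details are
illustrative):

def create_port(self, context, port):
    with context.session.begin(subtransactions=True):
        db_port = super(MyPlugin, self).create_port(context, port)
        self._process_extension_data(context, port, db_port)  # hypothetical
    # The extra read is wasteful, as noted above, but it is currently the
    # only way to return exactly what a subsequent GET would return.
    return self.get_port(context, db_port['id'])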
>> On Mar 7, 2016 01:06, "Gary Kotton"  wrote:
>>
>>> I do not think that this is a bug in the plugin. Why are we not doing
>>> the changes in the base class (unless that is not possible)? Having an
>>> extra read when a resource is created seems like a bit of an overkill.
>>> I understand that it is what is done at the moment.
>>> I think that at the summit we should try and discuss how we can manage
>>> extensions better. Maybe the time has even come for us to consider the V3
>>> neutron API and to make all of the ‘default core services’ part of the
>>> official API, so we will not have to do certain hacks to get the plugins to
>>> work.
>>>
>>>
>>> From: Kevin Benton 
>>> Reply-To: OpenStack List 
>>> Date: Sunday, March 6, 2016 at 11:27 PM
>>> To: OpenStack List 
>>> Subject: Re: [openstack-dev] [Neutron][tempest] Timestamp service
>>> extension breaks CI
>>>
>>> Keep in mind that the fix for ML2 is the correct behavior, not a workaround.
>>> It was not including extension data in create calls so there was an API
>>> difference between a create and a get/update of the same object. It's now
>>> calling the extensions to let them populate their fields of the dict.
>>>
>>> If your plugin does not exhibit the correct behavior in this case, I
>>> would just disable the test in question because it sounds like a bug in the
>>> plugin, not the test. It's reasonable to expect the timestamps that will be
>>> visible on every other API call to also be visible in create calls.
>>> Hi,
>>> Gal Sagie pointed me to patches in ML2 and OVN that address this by
>>> re-reading the networks and ports to ensure that the information is read.
>>> For those interested and whom it affects, please see:
>>> ML2 - https://review.openstack.org/#/c/276219/
>>> OVN - https://review.openstack.org/#/c/277844/
>>>
>>> Thanks
>>> Gary
>>>
>>> From: Gary Kotton 
>>> Reply-To: OpenStack List 
>>> Date: Sunday, March 6, 2016 at 4:04 PM
>>> To: OpenStack List 
>>> Subject: [openstack-dev] [Neutron][tempest

Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Salvatore Orlando
On 3 March 2016 at 10:38, Ihar Hrachyshka  wrote:

> Kevin Benton  wrote:
>
> Hi,
>>
>> I know this has come up in the past, but some folks in the infra channel
>> brought up the topic of changing the default security groups to allow all
>> traffic.
>>
>> They had a few reasons for this that I will try to summarize here:
>> * Ports 'just work' out of the box so there is no troubleshooting to
>> eventually find out that ingress is blocked by default.
>> * Instances without ingress are useless so a bunch of API calls are
>> required to make them useful.
>> * Some cloud providers allow all traffic by default (e.g. Digital Ocean,
>> RAX).
>> * It violates the end-to-end principle of the Internet to have a
>> middle-box meddling with traffic (the compute node in this case).
>> * Neutron cannot be trusted to do what it says it's doing with the
>> security groups API so users want to orchestrate firewalls directly on
>> their instances.
>>
>>
>> So this ultimately brings up two big questions. First, can we agree on a
>> set of defaults that is different than the one we have now; and, if so, how
>> could we possibly manage upgrades where this will completely change the
>> default filtering for users using the API?
>>
>
> No. Such a change may expose existing users to breaches.


Indeed. Even if the defaults are made discoverable via the API, changing
them will trigger upgrade mayhem.
API consumers would first need to be aware that security group defaults
might differ across deployments.
Now, if we had a versioned API... but I won't go back there. Maybe just do
another extension, and all hail API evolution via extensions.


>
>
>
>> Second, would it be acceptable to make this operator configurable? This
>> would mean users could receive different default filtering as they moved
>> between clouds.
>>
>
> While I am not happy that OpenStack cloud behaviour drifts between setups,
> I accept that’s where we already are, having some clouds redefining default
> rules.
>

> Considering reality we are already in, we could probably introduce
> configurable, API discoverable default rules.
>
> If we go this route, I believe we should discourage feature usage by
> writing certification tests that validate those rules are *not* modified
> for any setup that claims DefCore compatibility.
>
> Now, once we have it, it will be the user choice whether they want to
> complicate their orchestration code to deal with incompatibilities, or they
> just vote for DefCore compliant cloud.


So it seems you do not like the idea after all!
It would be interesting to see whether the DefCore committee reckons there
should be a test verifying the enforcement of a "canonical" default security
group.
I honestly do not have an opinion in that regard, but I would feel quite
disappointed if, for instance, my OpenStack implementation were not
allowed to use the OpenStack trademark because I'm allowing users to ssh
into their floating IPs.

Salvatore


>
> Ihar
>
>


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-12 Thread Salvatore Orlando
On 11 February 2016 at 20:17, John Belamaric 
wrote:

>
> On Feb 11, 2016, at 12:04 PM, Armando M.  wrote:
>
>
>
> On 11 February 2016 at 07:01, John Belamaric 
> wrote:
>
>>
>>
>>
>> It is only internal implementation changes.
>>
>
> That's not entirely true, is it? There are config variables to change and
> it opens up the possibility of a scenario that the operator may not care
> about.
>
>
>
> If we were to remove the non-pluggable version altogether, then the
> default for ipam_driver would switch from None to internal. Therefore,
> there would be no config file changes needed.
>

I think this is correct.
Assuming the migration path to Newton will include the data transformation
from built-in to pluggable IPAM, do we just remove the old code and models?
On the other hand, do you think it might make sense to give operators a
chance to roll back - perhaps just in case some nasty bug pops up?
What's the team's level of confidence in the robustness of the reference
IPAM driver?

Salvatore



>
>
> John
>
>
>


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-11 Thread Salvatore Orlando
The difference lies in the process in my opinion.
If the switch is added into the migration path then we will tell operators
when to switch.
I was suggesting doing it manually because we just don't know if every
operator is happy about doing the switch when upgrading to Newton, but
perhaps it is just me over-worrying about operator behaviour.

The other aspect is the deprecation process. If you add the switch into the
DB migration path then the whole deprecation becomes superseded as the old
IPAM logic should be abandoned immediately after that. But perhaps the
other way of looking at it is that we should make an exception in the
deprecation process.

Salvatore

On 11 February 2016 at 00:19, Carl Baldwin  wrote:

> On Thu, Feb 4, 2016 at 8:12 PM, Armando M.  wrote:
> > Technically we can make this as sophisticated and seamless as we want,
> but
> > this is a one-off, once it's done the pain goes away, and we won't be
> doing
> > another migration like this ever again. So I wouldn't over engineer it.
>
> Frankly, I was worried that going the other way was over-engineering
> it.  It will be more difficult for us to manage this transition.
>
> I'm still struggling to see what makes this particular migration
> different than other cases where we change the database schema and the
> code a bit and we automatically migrate everyone to it as part of the
> routine migration.  What is it about this case that necessitates
> giving the operator the option?
>
> Carl
>


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-05 Thread Salvatore Orlando
On 5 February 2016 at 17:58, Neil Jerram  wrote:

> On 05/02/16 16:31, Pavel Bondar wrote:
> > On 05.02.2016 12:28, Salvatore Orlando wrote:
> >>
> >>
> >> On 5 February 2016 at 04:12, Armando M. <arma...@gmail.com> wrote:
> >>
> >>
> >>
> >> On 4 February 2016 at 08:22, John Belamaric
> >> <jbelama...@infoblox.com> wrote:
> >>
> >>
> >> > On Feb 4, 2016, at 11:09 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:
> >> >
> >> > On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar <pbon...@infoblox.com> wrote:
> >> >> I am trying to bring more attention to [1] to make final
> decision on
> >> >> approach to use.
> >> >> There are a few point that are not 100% clear for me at this
> point.
> >> >>
> >> >> 1) Do we plan to switch all current clouds to pluggable ipam
> >> >> implementation in Mitaka?
>
> I possibly shouldn't comment at all, as I don't know the history, and
> wasn't around when the fundamental design decisions here were being made.
>
> However, it seems a shame to me that this was done in a way that needs a
> DB migration at all.  (And I would have thought it possible for the
> default pluggable IPAM driver to use the same DB state as the
> non-pluggable IPAM backend, given that it is delivering the same
> semantics.)  Without that, I believe it should be a no-brainer to switch
> unconditionally to the pluggable IPAM backend.
>

This was indeed the first implementation attempt that we made, but it
failed spectacularly as we wrestled with different foreign key
relationships in the original and new model.
Pavel has all the details, but after careful consideration we decided to
adopt a mirrored model with separate db tables.


>
> Sorry if that's unhelpful...
>
> Neil
>
>


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-05 Thread Salvatore Orlando
On 5 February 2016 at 04:12, Armando M.  wrote:

>
>
> On 4 February 2016 at 08:22, John Belamaric 
> wrote:
>
>>
>> > On Feb 4, 2016, at 11:09 AM, Carl Baldwin  wrote:
>> >
>> > On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar 
>> wrote:
>> >> I am trying to bring more attention to [1] to make final decision on
>> >> approach to use.
>> >> There are a few points that are not 100% clear for me at this point.
>> >>
>> >> 1) Do we plan to switch all current clouds to pluggable ipam
>> >> implementation in Mitaka?
>> >
>> > I think our plan originally was only to deprecate the non-pluggable
>> > implementation in Mitaka and remove it in Newton.  However, this is
>> > worth some more consideration.  The pluggable version of the reference
>> > implementation should, in theory, be at parity with the current
>> > non-pluggable implementation.  We've tested it before and shown
>> > parity.  What we're missing is regular testing in the gate to ensure
>> > it continues this way.
>> >
>>
>> Yes, it certainly should be at parity, and gate testing to ensure it
>> would be best.
>>
>> >> yes -->
>> >> Then data migration can be done as alembic_migration and it is what
>> >> currently implemented in [2] PS54.
>> >> In this case during upgrade from Liberty to Mitaka all users are
>> >> unconditionally switched to reference ipam driver
>> >> from built-in ipam implementation.
>> >> If an operator wants to keep using the built-in ipam implementation it can
>> >> manually turn off ipam_driver in neutron.conf
>> >> immediately after upgrade (data is not deleted from old tables).
>> >
>> > This has a certain appeal to it.  I think the migration will be
>> > straight-forward since the table structure doesn't really change much.
>> > Doing this as an alembic migration would be the easiest from an
>> > upgrade point of view because it fits seamlessly in to our current
>> > upgrade strategy.
>> >
>> > If we go this way, we should get this in soon so that we can get the
>> > gate and others running with this code for the remainder of the cycle.
>> >
>>
>> If we do this, and the operator reverts back to the non-pluggable version,
>> then we will leave stale records in the new IPAM tables. At the very
>> least,
>> we would need a way to clean those up and to migrate at a later time.
>>
>> >> no -->
>> >> Operator is free to choose whether it will switch to pluggable ipam
>> >> implementation
>> >> and when. And it leads to no automatic data migration.
>> >> In this case operator is supplied with script for migration to
>> pluggable
>> >> ipam (and probably from pluggable ipam),
>> >> which can be executed by operator during upgrade or at any point after
>> >> upgrade is done.
>> >> I was testing this approach in [2] PS53 (have unresolved issues in it
>> >> for now).
>> >
>> > If there is some risk in changing over then this should still be
>> > considered.  But, the more I think about it, the more I think that we
>> > should just make the switch seamlessly for the operator and be done
>> > with it.  This approach puts a certain burden on the operator to
>> > choose when to do the migration and go through the steps manually to
>> > do it.  And, since our intention is to deprecate and remove the
>> > non-pluggable implementation, it is inevitable that they will have to
>> > eventually switch anyway.
>> >
>> > This also makes testing much more difficult.  If we go this route, we
>> > really should be testing both equally.  Does this mean that we need to
>> > set up a whole new job to run the pluggable implementation along side
>> > the old implementation?  This kind of feels like a nightmare to me.
>> > What do you think?
>> >
>>
>> Originally (as I mentioned in the meeting), I was thinking that we should
>> not automatically migrate. However, I see the appeal of your arguments.
>> Seamless is best, of course. But if we offer going back to non-pluggable,
>> (which I think we need to at this point in the Mitaka cycle), we probably
>> need to provide a script as mentioned above. Seems feasible, though.
>>
>>
>>
>>
> We're tackling more than one issue in this thread and I am having a hard
> time wrapping my head around it. Let me try to sum it all up.
>
> a) switching from non-pluggable to pluggable is a matter of running a
> data migration + a config change
> b) We can either switch automatically on restart (option b1) or manually
> on operator command (b2)
> c) Do we make pluggable ipam the default, and when?
> d) Testing the migration
> e) Deprecating the non-pluggable one.
>
> I hope we are all in agreement on bullet point a), because knowing the
> complexity of your problem is halfway to our solution.
>
> As for b), I think that manual migration is best for two reasons: 1) In HA
> scenarios, seamless upgrade (ie. on server restart) can be a challenge; 2)
> the operator must 'manually' change the driver, so he/she is very conscious
> of what he/she is doing and can take enough precautions should something go
> astray. Technically we can make this a

Re: [openstack-dev] [neutron][networking-calico] To be or not to be an ML2 mechanism driver?

2016-01-25 Thread Salvatore Orlando
I agree with Armando that at the end of the day user requirements should
drive these decisions.
I think you did a good job listing the pros and cons of
choosing a standalone plugin rather than an ML2 driver.

The most important point you made, in my opinion, concerns the ability to
support multiple backends.
I find your analysis correct; however, I might simplify it by saying that
since the Calico driver is unlikely to interact with any other mechanism
driver, the remaining value of adopting ML2 is mostly a way to re-use code
and implement common Neutron "paradigms" - and, as you wrote, you can still
retain ML2's architecture even in a new plugin (see the sketch below).

Further, what Ian wrote is also true - even with a standalone plugin you
will still be constrained by entities which are meant to represent L2
constructs.

Salvatore
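
A minimal sketch of that last idea - a standalone core plugin deriving
from ML2 - with illustrative names (the config overrides shown are an
assumption about how one might pin the drivers):

from oslo_config import cfg

from neutron.plugins.ml2 import plugin as ml2_plugin

class CalicoPlugin(ml2_plugin.Ml2Plugin):
    # Standalone core plugin that keeps ML2's internal architecture.

    def __init__(self):
        # Pin the type and mechanism drivers here so that
        # 'core_plugin = calico' is the only option operators set.
        cfg.CONF.set_override('type_drivers', ['flat'], group='ml2')
        cfg.CONF.set_override('mechanism_drivers', ['calico'], group='ml2')
        super(CalicoPlugin, self).__init__()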



On 24 January 2016 at 23:45, Armando M.  wrote:

>
>
> On 22 January 2016 at 10:35, Neil Jerram 
> wrote:
>
>> networking-calico [1] is currently implemented as an ML2 mechanism
>> driver, but
>> I'm wondering if it might be better as its own core plugin, and I'm
>> looking for
>> input about the implications of that, or for experience with that kind of
>> change; and also for experience and understanding of hybrid ML2
>> networking.
>>
>> Here the considerations that I'm aware of:
>>
>> * Why change from ML2 to core plugin?
>>
>> - It could be seen as resolving a conceptual mismatch.
>> networking-calico uses
>>   IP routing to provide L3 connectivity between VMs, whereas ML2 is
>> ostensibly
>>   all about layer 2 mechanisms.  Arguably it's the Wrong Thing for a
>> L3-based
>>   network to be implemented as an ML2 driver, and changing to a core
>> plugin
>>   would fix that.
>>
>>   On the other hand, the current ML2 implementation seems to work fine,
>> and I
>>   think that the L2 focus of ML2 may be seen as traditional assumption
>> just
>>   like the previously assumed L2 semantics of neutron Networks; and it
>> may be
>>   that the scope of 'ML2' could and should be expanded to both L2- and
>> L3-based
>>   implementations, just as [2] is proposing to expand the scope of the
>> neutron
>>   Network object to encompass L3-only behaviour as well as L2/L3.
>>
>> - Some simplification of the required config.  A single 'core_plugin =
>> calico'
>>   setting could replace 'core_plugin = ml2' plus a handful of ML2
>> settings.
>>
>> - Code-wise, it's a much smaller change than you might imagine, because
>> the new
>>   core plugin can still derive from ML2, and so internally retain the ML2
>>   coding architecture.
>>
>> * Why stay as an ML2 driver?
>>
>> - Perhaps because of ML2's support for multiple networking
>> implementations in
>>   the same cluster.  To the extent that it makes sense, I'd like
>>   networking-calico networks to coexist with other networking
>> implementations
>>   in the same data center.
>>
>>   But I'm not sure to what extent such hybrid networking is a real
>> thing, and
>>   this is the main point on which I'd appreciate input.  In principle ML2
>>   supports multiple network Types and multiple network Mechanisms, but I
>> wonder
>>   how far that really works - or is useful - in practice.
>>
>>   Let's look at Types first.  ML2 supports multiple provider network
>> types,
>>   with the Type for each network being specified explicitly by the
>> provider API
>>   extension (provider:network_type), or else defaulting to the
>>   'external_network_type' ML2 config setting.  However, would a cloud
>> operator
>>   ever actually use more than one provider Type?  My understanding is that
>>   provider networks are designed to map closely onto the real network,
>> and I
>>   guess that an operator would also favour a uniform design there, hence
>> just
>>   using a single provider network Type.
>>
>>   For tenant networks ML2 allows multiple network Types to be configured
>> in the
>>   'tenant_network_types' setting.  However, if my reading of the code is
>>   correct, only the first of these Types will ever be used for a tenant
>> network
>>   - unless the system runs out of the 'resources' needed for that Type,
>> for
>>   example if the first Type is 'vlan' but there are no VLAN IDs left to
>> use.
>>   Is that a feature that is used in practice, within a given
>> deployment?  For
>>   example, to first use VLANs for tenant networks, then switch to
>> something else
>>   when those run out?
>>
>>   ML2 also supports multiple mechanism drivers.  When a new Port is being
>>   created, ML2 calls each mechanism driver to give it the chance to do
>> binding
>>   and connectivity setup for that Port.  In principle, if mechanism
>> drivers are
>>   present, I guess each one is supposed to look at some of the available
>> Port
>>   data - and perhaps the network Type - and thereby infer whether it
>> should be
>>   responsible for that Port, and so do the setup for it.  But I wonder if
>>   anyone runs a cloud where that really happens?  If so, ha

Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-21 Thread Salvatore Orlando
More inline,
Salvatore

On 20 January 2016 at 16:51, Shraddha Pandhe 
wrote:

> Thank you all for the comments.
>
> The client that we expect to call this API with thousands of network-ids
> is nova-scheduler.
>
> Since this call is happening in the middle of scheduling, we don't want to
> spend time paginating or sending multiple requests. I have tens of
> thousands of networks and subnets in my test cluster right now and with
> that scale, the extension takes more than 2 seconds to return.
>

What percentage of this time is spent in the GET /v2.0/networks call?


> With multiple calls, scheduler will become very slow.
>

If the calls are serialized that is surely correct. As most production
neutron servers employ multiple workers the overhead of doing multiple
calls in parallel might however be tolerable.
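For illustration, a minimal sketch of that chunk-and-parallelize approach
(assuming python-neutronclient; the chunk size and worker count are
arbitrary):

    from concurrent.futures import ThreadPoolExecutor

    def fetch_networks(neutron, net_ids, chunk_size=100, workers=8):
        # Split the ID list into chunks that keep the URI length acceptable.
        chunks = [net_ids[i:i + chunk_size]
                  for i in range(0, len(net_ids), chunk_size)]
        # Issue the GET requests concurrently against the server workers.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            pages = list(pool.map(
                lambda ids: neutron.list_networks(id=ids)['networks'],
                chunks))
        # Flatten the per-chunk results into a single list.
        return [net for page in pages for net in page]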
I'd like to understand more about your use case. Here are some additional
questions

Is network-id the only attribute you can filter on?
Assuming Neutron provided tags in the API, could you leverage those?
Why is tenant-id not a viable alternative?


>
> I agree that sending payload with GET is not recommended and most
> libraries just drop the payload for such cases.
>

Nevertheless, we're pretty much in control of that. We've already discussed
this, and doing so does not violate RFC7231, so it's ok from a protocol
perspective.
If needed, we can tweak the API request processing workflow to allow
this.
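For illustration, this is what such a request could look like on the client
side with python-requests (endpoint, port and token are placeholders); note
that, as mentioned, Neutron's WSGI framework currently ignores the body:

    import requests

    # The filter list travels in a JSON body rather than in the URI.
    # Many proxies and client libraries drop bodies on GET, which is
    # exactly the caveat being discussed.
    resp = requests.get(
        'http://hostname:9696/v2.0/networks',
        headers={'X-Auth-Token': '<token>'},
        json={'id': ['fffecbd1-0f6d-4f02-aee7-ca62094830f5',
                     'fffeee07-4f94-4cff-bf8e-a2aa7be59e2e']},
    )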


>
>
>
> On Wed, Jan 20, 2016 at 2:27 PM, Salvatore Orlando wrote:
>
>> I tend to agree with Doug and Ryan's stance. If you need to pass 1000s of
>> network-id on a single request you're probably not doing things right on
>> the client side.
>> As Ryan suggested you can try and split the request in multiple requests
>> with acceptable URI length and send them in parallel; this will add some
>> overhead, but should work flawlessly.
>>
>> Once tags will be implemented you will be able to leverage those to
>> simplify your queries.
>>
>> Regarding GET requests with plenty of parameters, this discussion came up
>> on the mailing list a while ago [1]. A good proposal was made in that
>> thread but never formalised as an API-wg guideline; you could consider submitting
>> a patch to the API-wg too.
>> Note however that Neutron won't be able to support it out of the box
>> considering its WSGI framework completely ignores request bodies on GET
>> requests.
>>
>> Salvatore
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078243.html
>>
>> On 20 January 2016 at 12:33, Ryan Brown  wrote:
>>
>>> So having a URI too long error is, in this case, likely an indication
>>> that you're requesting too many things at once.
>>>
>>> You could:
>>> 1. Request 100 at a time in parallel
>>> 2. Find a query that would give you all those networks & page through
>>> the reply
>>> 3. Page through all the user's networks and filter client-side
>>>
>>> How is the user supposed to be assembling this giant UUID list? I'd
>>> think it would be easier for them to specify a query (e.g. "get usage data
>>> for all my production subnets" or something).
>>>
>>>
>>> On 01/19/2016 06:59 PM, Shraddha Pandhe wrote:
>>>
>>>> Hi folks,
>>>>
>>>>
>>>> I am writing a Neutron extension which needs to take 1000s of
>>>> network-ids as argument for filtering. The CURL call is as follows:
>>>>
>>>> curl -i -X GET
>>>> 'http://hostname:port
>>>> /neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
>>>> -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
>>>> "X-Auth-Token: "
>>>>
>>>>
>>>> The list of net-ids can go up to 1000s. The problem is, with such a large
>>>> url, I get the "Request URI too long" error. I don't want to update this
>>>> limit as proxies can have their own limits.
>>>>
>>>> What options do I have to send 1000s of network IDs?
>>>>
>>>> 1. -d '{}' is not a recommended option for a GET call, and the wsgi Controller
>>>> drops the data part when routing the request.
>>>>
>>>> 2. Use POST instead of GET? I will need to write the get_
>>>> logic inside create_resource logi

Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-20 Thread Salvatore Orlando
I tend to agree with Doug and Ryan's stance. If you need to pass 1000s of
network-id on a single request you're probably not doing things right on
the client side.
As Ryan suggested you can try and split the request in multiple requests
with acceptable URI length and send them in parallel; this will add some
overhead, but should work flawlessly.

Once tags will be implemented you will be able to leverage those to
simplify your queries.

Regarding GET requests with plenty of parameters, this discussion came up
on the mailing list a while ago [1]. A good proposal was made in that
thread but never formalised as an API-wg guideline; you could consider submitting
a patch to the API-wg too.
Note however that Neutron won't be able to support it out of the box
considering its WSGI framework completely ignores request bodies on GET
requests.

Salvatore

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078243.html

On 20 January 2016 at 12:33, Ryan Brown  wrote:

> So having a URI too long error is, in this case, likely an indication that
> you're requesting too many things at once.
>
> You could:
> 1. Request 100 at a time in parallel
> 2. Find a query that would give you all those networks & page through the
> reply
> 3. Page through all the user's networks and filter client-side
>
> How is the user supposed to be assembling this giant UUID list? I'd think
> it would be easier for them to specify a query (e.g. "get usage data for
> all my production subnets" or something).
>
>
> On 01/19/2016 06:59 PM, Shraddha Pandhe wrote:
>
>> Hi folks,
>>
>>
>> I am writing a Neutron extension which needs to take 1000s of
>> network-ids as argument for filtering. The CURL call is as follows:
>>
>> curl -i -X GET
>> 'http://hostname:port
>> /neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
>> -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
>> "X-Auth-Token: "
>>
>>
>> The list of net-ids can go up to 1000s. The problem is, with such a large
>> url, I get the "Request URI too long" error. I don't want to update this
>> limit as proxies can have their own limits.
>>
>> What options do I have to send 1000s of network IDs?
>>
>> 1. -d '{}' is not a recommended option for a GET call, and the wsgi Controller
>> drops the data part when routing the request.
>>
>> 2. Use POST instead of GET? I will need to write the get_
>> logic inside create_resource logic for this to work. It's a hack, but
>> complies with the HTTP standard.
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> --
> Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Cinder] [Nova] [Neutron] Gathering quota usage data in Horizon

2015-12-18 Thread Salvatore Orlando
The point raised by Matt for Nova applies to Neutron as well.
Neutron does not have strict deadlines for blueprint approval; however even
if in theory it would still be possible to achieve this for Mitaka, it is
rather unlikely since the number of blueprints already in the pipeline is
way more than what can reasonably be implemented in this release cycle.

Anyway, it would be a matter of resuscitating the blueprint [1] and pretty
much reworking it in light of the discussion we had around "usage APIs" with
the API working group [2].
I will be happy to assist with design and implementation, so if you have
any requirement from the Horizon side, like the ability to filter either
by tenant or by resource, just let me know.
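Purely as an illustration, a possible shape for such a usage-aware quota
payload (this API does not exist today; the field and resource names below
are assumptions):

    # Hypothetical per-tenant quota usage payload, filterable by tenant
    # or by resource; neither the call nor the field names exist today.
    usage = {
        'quota_usages': {
            'network': {'limit': 10, 'used': 4},
            'port': {'limit': 50, 'used': 17},
            'floatingip': {'limit': 5, 'used': 2},
        }
    }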

Salvatore

[1] https://review.openstack.org/#/c/102199/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-November/051152.html

On 18 December 2015 at 15:12, Matt Riedemann 
wrote:

>
>
> On 12/17/2015 2:40 PM, Ivan Kolodyazhny wrote:
>
>> Hi Timur,
>>
>> Did you try this Cinder API [1]?  Here [2] is cinderclient output.
>>
>>
>>
>> [1]
>>
>> https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/quotas.py#L33
>> [2] http://paste.openstack.org/show/482225/
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>> On Thu, Dec 17, 2015 at 8:41 PM, Timur Sufiev wrote:
>>
>> Hello, folks!
>>
>> I'd like to initiate a discussion of the feature request I'm going
>> to make on behalf of Horizon to every core OpenStack service which
>> supports the Quota feature, namely Cinder, Nova and Neutron.
>>
>> Although all three services' APIs support special calls to get
>> current quota limitations (Nova and Cinder allow getting and updating
>> both per-tenant and default cloud-wide limitations; Neutron allows
>> it only for per-tenant limitations), there is no special call
>> in any of these services to get current per-tenant usage of quota.
>> Because of that Horizon needs to get, say for 'volumes' quota, a
>> list of Cinder volumes in the current tenant and then just calculate
>> its length [1]. When there are really a lot of entities in a tenant -
>> instances/volumes/security groups/whatever - all these calls add up
>> and make rendering pages in Horizon much slower than it could
>> be. Is it possible to provide special API calls to alleviate this?
>>
>> [1]
>>
>> https://github.com/openstack/horizon/blob/9.0.0.0b1/openstack_dashboard/usage/quotas.py#L350
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> I think Timur is asking for a way to filter on only certain resources for
> quota usage/limits, like volumes in cinder or instances in nova, rather
> than getting back all resource usage/limits per tenant.
>
> Is that correct, Timur?
>
> While it's possible to add this, I'm not sure how much time it's actually
> going to save in the DB query time to get the quota information for a
> tenant.
>
> Anyway, it's an API change so it would require a spec for nova which means
> we wouldn't be getting to that until at least N since we're in spec freeze
> for mitaka.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [client][all][neutron] client option removal policy

2015-12-07 Thread Salvatore Orlando
The Neutron API dropped XML support quite some time ago.
Therefore specifying --request-format xml already produces an error.
Even though this parameter is already vestigial and could simply be
removed, we don't know whether anyone is using it. For instance, one could
have a set of scripts that explicitly use it just to make sure the format
never switched to XML without their knowing!

Deprecation first is therefore, in my opinion, always the recommended path.
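For illustration, a minimal sketch of the deprecate-first path using
argparse (the warning text is illustrative, and neutronclient's actual
option machinery differs):

    import argparse
    import warnings

    class DeprecatedAction(argparse.Action):
        # Warn whenever the vestigial option is explicitly passed.
        def __call__(self, parser, namespace, values, option_string=None):
            warnings.warn('%s is deprecated: JSON is the only supported '
                          'request format.' % option_string,
                          DeprecationWarning)
            setattr(namespace, self.dest, values)

    parser = argparse.ArgumentParser()
    parser.add_argument('--request-format', choices=['json'], default='json',
                        action=DeprecatedAction)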

Salvatore

On 7 December 2015 at 12:58, Akihiro Motoki  wrote:

> Hi,
>
> neutronclient is now dropping XML support and as a result
> "--request-format" option is no longer needed as JSON is the only format
> now.
>
> What is the recommended way for options no longer needed?
> Does bumping major version of CLI allow us to drop an option without
> deprecation?
>
> - Deprecate such option.
>   The option still exists with only one available choice until the
> option is deleted.
>
> - Drop it without deprecation.
>   This breaks users who use "--request-format json", but 'json' is the
> default
>   value and most users do not specify the option.
>
> Thanks,
> Akihiro
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug deputy process

2015-12-02 Thread Salvatore Orlando
I only have some historical, anecdotal, and rapidly waning memory of
previous releases.
Nevertheless my feeling is that the process has been a success so far.
In past times it would not have been a surprise if a bug fell under the
radar until that well known brownish matter hit the proverbial fan.

Also, only 17 bugs are in "new" status out of 373, which means that - at
worst - only 4.6% of reported bugs have not yet been analysed by the team.
I reckon these numbers are rather impressive. Kudos to both the deputies
and most importantly to Armando who set up the process.

Salvatore


On 2 December 2015 at 19:49, Armando M.  wrote:

> Hi neutrinos,
>
> It's been a couple of months that the Bug deputy process has been in place
> [1,2]. Since the beginning of Mitaka we have collected the following
> statistics (for neutron and neutronclient):
>
> Total bug reports: 373
>
>- Fix committed: 144
>- Unassigned: 73
>   - New: 17
>   - Incomplete: 20
>   - Confirmed: 27
>   - Triaged: 6
>
>
> At first, it is clear that we do not fix issues nearly as fast as they
> come in, but at least we managed to keep the number of unassigned/unvetted
> bugs relatively small, so kudos to you all who participated in this
> experiment. I don't have data based on older releases, so I can't see
> whether we've improved or worsened, and I'd like to ask for feedback from
> the people who played with this first hand, especially on the amount of
> time it has taken them to do deputy duty for their assigned week.
>
>- ihrachys
>- regXboi
>- markmcclain
>- mestery
>- mangelajo
>- garyk
>- rossella_s
>- dougwig
>
> Many thanks,
> Armando
>
> [1] https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy
> [2]
> http://docs.openstack.org/developer/neutron/policies/bugs.html#neutron-bug-deputy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Salvatore Orlando
More comments inline.
I shall stop trying to be ironic (pun intended) in my posts.

Salvatore

On 5 November 2015 at 18:37, Kyle Mestery  wrote:

> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>
>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>> make IPAM much more powerful. Some other projects already do things like
>>> this.
>>>
>>
>> :( Actually, though "powerful" it also leads to implementation details
>> leaking directly out of the public REST API. I'm very negative on this and
>> would prefer an actual codified REST API that can be relied on regardless
>> of backend driver or implementation.
>>
>
> I agree with Jay here. We've had people propose similar things in Neutron
> before, and I've been against them. The entire point of the Neutron REST
> API is to not leak these details out. It dampens the strength of the
> logical model, and it tends to have users become reliant on backend
> implementations.
>

I see I did not manage to convey irony and sarcasm accurately in my
previous post ;)
The point was that thanks to a blooming number of extensions the Neutron
API is already hardly portable. Blob attributes (or dict attributes, or
key/value list attributes, or whatever does not have a precise schema) are
a nail in the coffin, and also violate the only tenet Neutron has somehow
managed to honour, which is being backend agnostic.
And the fact that the port binding extension is pretty much that is not a
valid argument, imho.
On the other hand, I'm all in for extending the DB schema and driver logic to
suit all IPAM needs; at the end of the day that's what we do with plugins for
all sorts of stuff.



>
>
>>
>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>
>>
>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>> structured, not a Wild West free-for-all. The biggest problem with using
>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>> ability to evolve the API in a structured, versioned way. Instead of
>> evolving the API using microversions, instead every vendor just jams
>> whatever they feel like into the JSON blob over time. There's no way for
>> clients to know what the server will return at any given time.
>>
>> Achieving consensus on a REST API that meets the needs of a variety of
>> backend implementations is *hard work*, yes, but it's what we need to do if
>> we are to have APIs that are viewed in the industry as stable,
>> discoverable, and reliably useful.
>>
>
> ++, this is the correct way forward.
>

Cool, but let me point out that experience has taught us that anything
that is the result of a compromise between several parties following
different agendas is bound to fail, as it does not fully satisfy the
requirements of any stakeholder.
If this information is needed for making scheduling decisions based on
network requirements, then it makes sense to expose it also
at the API layer (I assume there are also plans for making the scheduler
*seriously* network aware). However, this information should have a
well-defined schema with no leeway for 'extensions'; such a schema can evolve
over time.
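To make the contrast concrete, a small illustration (all names below are
hypothetical):

    # Free-form blob: opaque and backend-specific; it cannot be
    # validated, documented, or evolved in a versioned way.
    pool_extra = {'ipam_blob': '{"rack": "r13", "tier": "db", "foo": 42}'}

    # Well-defined schema: every key is documented and validated, and
    # the schema itself can evolve over time.
    pool_tags = {'rack_id': 'r13', 'tier': 'db'}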


> Thanks,
> Kyle
>
>
>>
>> Best,
>> -jay
>>
>> Best,
>> -jay
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando wrote:
>>>
>>> Arbitrary blobs are a powerful tool to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for
>>> versioning or portability purposes.
>>> The parameters that should end up in such blob are typically
>>> specific for the target IPAM driver (to an extent they might even
>>> identify a specific driver to use), and therefore an API consumer
>>> who knows what backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability
>>> and not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more
>>> input on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension
>>> 

Re: [openstack-dev] [nova][api]

2015-11-06 Thread Salvatore Orlando
It makes sense to have a single point where response pagination is handled in
API processing, rather than scattering pagination across Nova REST
controllers; unfortunately I am not really able to comment on how feasible
that would be in Nova's WSGI framework.

However, I'd just like to add that there is an approved guideline for API
response pagination [1], and it would be good if all these efforts followed
the guideline.

Salvatore

[1]
https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
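For reference, a minimal sketch of the marker/limit scheme from that
guideline, applied to Neutron's networks collection with python-requests
(endpoint and token are placeholders):

    import requests

    def list_all_networks(url, token, limit=500):
        # Walk the collection page by page using 'limit' and 'marker'.
        params = {'limit': limit}
        while True:
            page = requests.get(url, params=params,
                                headers={'X-Auth-Token': token}).json()
            networks = page.get('networks', [])
            if not networks:
                return
            for net in networks:
                yield net
            # The marker is the id of the last item of the previous page.
            params['marker'] = networks[-1]['id']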

On 5 November 2015 at 03:09, Tony Breeds  wrote:

> Hi All,
> Around the middle of October a spec [1] was uploaded to add pagination
> support to the os-hypervisors API.  While I recognize the use case it
> seemed
> like adding another pagination implementation wasn't an awesome idea.
>
> Today I see 3 more requests to add pagination to APIs [2]
>
> Perhaps I'm over thinking it but should we do something more strategic
> rather
> than scattering "add pagination here".
>
> It looks to me like we have at least 3 parties interested in this.
>
> Yours Tony.
>
> [1] https://review.openstack.org/#/c/234038
> [2]
> https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Salvatore Orlando
Arbitrary blobs are a powerful tool to circumvent limitations of an API,
as well as other constraints which might be imposed for versioning or
portability purposes.
The parameters that should end up in such blob are typically specific for
the target IPAM driver (to an extent they might even identify a specific
driver to use), and therefore an API consumer who knows what backend is
performing IPAM can surely leverage it.

Therefore this would make a lot of sense, assuming API portability and not
leaking backend details are not a concern.
The Neutron team API & DB lieutenants will be able to provide more input on
this regard.

In this case other approaches such as a vendor specific extension are not a
solution - assuming your granularity level is the allocation pool; indeed
allocation pools are not first-class neutron resources, and it is
therefore not possible to have APIs which associate vendor specific properties
to allocation pools.

Salvatore

On 4 November 2015 at 21:46, Shraddha Pandhe 
wrote:

> Hi folks,
>
> I have a small question/suggestion about IPAM.
>
> With IPAM, we are allowing users to have their own IPAM drivers so that
> they can manage IP allocation. The problem is, the new ipam tables in the
> database have the same columns as the old tables. So, as a user, if I want
> to have my own logic for ip allocation, I can't actually get any help from
> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
> I could put any useful information/tags there, that can help me for
> allocation.
>
> Does this make sense?
>
> e.g. If I want to create multiple allocation pools in a subnet and use
> them for different purposes, I would need some sort of tag for each
> allocation pool for identification. Right now, there is no scope for doing
> something like that.
>
> Any thoughts? If there are any other way to solve the problem, please let
> me know
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Salvatore Orlando
Inline,
Salvatore

On 4 November 2015 at 15:11, Cory Benfield  wrote:

>
> > On 4 Nov 2015, at 13:13, Salvatore Orlando 
> wrote:
> >
> > Regarding Jay's proposal, this would be tantamount to defining an API
> action for retrieving instances, something currently being discussed here
> [1].
> > The only comment I have is that I am not entirely sure whether using
> the POST verb for operations which do not alter at all the server
> representation of any object is in accordance with RFC 7231.
>
> It’s totally fine, so long as you define things appropriately. Jay’s
> suggestion does exactly that, and is entirely in line with RFC 7231.
>
> The analogy here is to things like complex search forms. Many search
> engines allow you to construct very complex search queries (consider
> something like Amazon or eBay, where you can filter on all kinds of
> interesting criteria). These forms are often submitted to POST endpoints
> rather than GET.
>
> This is totally fine. In fact, the first example from RFC 7231 Section
> 4.3.3 (POST) applies here: “POST is used for the following functions (among
> others): Providing a block of data […] to a data-handling process”. In this
> case, the data-handling function is the search function on the server.
>

I looked back at the RFC and indeed it does not state anywhere that a POST
operation is required to somehow change the state of any object, so the
approach is entirely fine from this aspect as well.


>
> The *only* downside of Jay’s approach is that the response cannot really
> be cached. It’s not clear to me whether anyone actually deploys a cache in
> this kind of role though, so it may not hurt too much.
>

I believe there would not be a great advantage in caching this kind of
response, as cache hits would be very low anyway.


> Cory
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Salvatore Orlando
Regarding Jay's proposal, this would be tantamount to defining an API
action for retrieving instances, something currently being discussed here
[1].
The only comment I have is that I am not entirely sure whether using the
POST verb for operations which do not alter at all the server representation
of any object is in accordance with RFC 7231.
A search API like the one pointed out by Julien is interesting; at first
glance I'm not able to comment on its RESTfulness - it definitely has
plenty of use cases and enables users to run complex queries; one possible
downside is that it increases the complexity of simple queries.

For the purpose of the Nova spec I think it might be ok to limit the
functionality to a "small number of instance ids" as expressed in the spec.
On the other hand, how crazy would it be to limit the number of bytes in the
URL by allowing clients to specify contracted forms of instance UUIDs - in a
way similar to git commit hashes?

[1] https://review.openstack.org/#/c/234994/
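For clarity, a sketch of what the proposed search resource could look like
from the client side (the /servers/search URI and the body layout are only a
proposal under discussion, not an existing Nova API; IDs and token are
placeholders):

    import requests

    # Hypothetical POST-based search: filters travel in the body, so
    # URI length is no longer a constraint.
    resp = requests.post(
        'http://nova-api:8774/v2.1/servers/search',
        headers={'X-Auth-Token': '<token>'},
        json={'filters': {'uuid': ['<uuid-1>', '<uuid-2>', '<uuid-3>']}},
    )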

On 4 November 2015 at 13:17, Sean Dague  wrote:

> On 11/03/2015 05:45 AM, Julien Danjou wrote:
> > On Tue, Nov 03 2015, Jay Pipes wrote:
> >
> >> My suggestion was to add a new POST /servers/search URI resource that
> can take
> >> a request body containing large numbers of filter arguments, encoded in
> a JSON
> >> object.
> >>
> >> API working group, what thoughts do you have about this? Please add your
> >> comments to the Gerrit spec patch if you have time.
> >
> > FWIW, we already have an extensive support for that in both Ceilometer
> > and Gnocchi. It looks like a small JSON query DSL that we're able to
> > "compile" down to SQL Alchemy filters.
> >
> > A few examples are:
> >
> http://docs.openstack.org/developer/gnocchi/rest.html#searching-for-resources
> >
> > I've planed for a long time to move this code to a library, so if Nova's
> > interested, I can try to move that forward eagerly.
>
> I guess I wonder what the expected interaction with things like
> Searchlight is? Searchlight was largely created for providing this kind
> of fast access to subsets of resources based on arbitrary attribute search.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-03 Thread Salvatore Orlando
This plan makes a lot of sense to me.
With the staggering number of sub-projects in neutron it is impossible for
the stable team to cope with the load. Delegation and decentralisation are a
must, and both sub-project maintainers and the stable team will benefit from
it.
Also, since patches can always be reverted and rights revoked in case of
misbehaviour I do not see any major downside.
I am sure the stable maint team will periodically monitor what's being
backported in order to intervene quickly if backport policies are violated.

Salvatore



On 3 November 2015 at 18:09, Kyle Mestery  wrote:

> On Tue, Nov 3, 2015 at 10:49 AM, Ihar Hrachyshka 
> wrote:
>
>>
>> Hi all,
>>
>> currently we have a single neutron-wide stable-maint gerrit group that
>> maintains all stable branches for all stadium subprojects. I believe
>> that in lots of cases it would be better to have subproject members
>> run their own stable maintenance programs, leaving
>> neutron-stable-maint folks to help them in non-obvious cases, and to
>> periodically validate that project-wide stable policies are still
>> honored.
>>
>> I suggest we open gate to creating subproject stable-maint teams where
>> current neutron-stable-maint members feel those subprojects are ready
>> for that and can be trusted to apply stable branch policies in
>> consistent way.
>>
>> Note that I don't suggest we grant those new permissions completely
>> automatically. If neutron-stable-maint team does not feel safe to give
>> out those permissions to some stable branches, their feeling should be
>> respected.
>>
>> I believe it will be beneficial both for subprojects that would be
>> able to iterate on backports in more efficient way; as well as for
>> neutron-stable-maint members who are often busy with other stuff, and
>> often times are not the best candidates to validate technical validity
>> of backports in random stadium projects anyway. It would also be in
>> line with general 'open by default' attitude we seem to embrace in
>> Neutron.
>>
>> If we decide it's the way to go, there are alternatives on how we
>> implement it. For example, we can grant those subproject teams all
>> permissions to merge patches; or we can leave +W votes to
>> neutron-stable-maint group.
>>
>> I vote for opening the gates, *and* for granting +W votes where
>> projects showed reasonable quality of proposed backports before; and
>> leaving +W to neutron-stable-maint in those rare cases where history
>> showed backports could get more attention and safety considerations
>> [with expectation that those subprojects will eventually own +W votes
>> as well, once quality concerns are cleared].
>>
>> If we indeed decide to bootstrap subproject stable-maint teams, I
>> volunteer to reach the candidate teams for them to decide on initial
>> lists of stable-maint members, and walk them thru stable policies.
>>
>> Comments?
>>
>>
> As someone who spends a considerable amount of time reviewing stable
> backports on a regular basis across all the sub-projects, I'm in favor of
> this approach. I'd like to be included when selecting teams which are
> appropriate to have their own stable teams as well. Please include me when
> doing that.
>
> Thanks,
> Kyle
>
>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proactive backporting

2015-10-16 Thread Salvatore Orlando
This sounds like a pretty decent idea to me. Considering Neutron's patch
merge rate, this activity should hopefully not take a considerable chunk of
your Friday.
It might also make sense to ask contributors to resume the habit of tagging
bugs with 'backport-potential' even if not in the RC period.

I am glad to offer my help as well in evaluating "backport worthiness", and
the process you outlined looks very good to me.
If there's any discussion needed for assessing whether a bug fix should be
backported or not, we could either use the etherpad or launchpad, with a
slight preference for launchpad.
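Incidentally, the weekly sweep you describe is easy to script; a rough
sketch (ref names are assumptions, and the regex only catches the usual
bug-tag footers):

    import re
    import subprocess

    def candidate_bugs(since='origin/stable/liberty', until='origin/master'):
        # Collect bug numbers referenced by commits merged since 'since'.
        log = subprocess.check_output(
            ['git', 'log', '%s..%s' % (since, until)]).decode('utf-8')
        return sorted(set(re.findall(
            r'(?:Closes|Related|Partial)-Bug:\s*#?(\d+)', log,
            re.IGNORECASE)))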

Salvatore

On 16 October 2015 at 16:19, Kyle Mestery  wrote:

> On Fri, Oct 16, 2015 at 7:33 AM, Ihar Hrachyshka 
> wrote:
>
>> Hi all,
>>
>> I’d like to introduce a new initiative around stable branches for neutron
>> official projects (neutron, neutron-*aas, python-neutronclient) that is
>> intended to straighten our backporting process and make us more proactive
>> in fixing bugs in stable branches. ‘Proactive' meaning: don’t wait until a
>> known bug hits a user that consumes stable branches, but backport fixes in
>> advance quickly after they hit master.
>>
>> The idea is simple: every Fri I walk thru the new commits merged into
>> master since last check; produce lists of bugs that are mentioned in
>> Related-Bug/Closes-Bug; paste them into:
>>
>> https://etherpad.openstack.org/p/stable-bug-candidates-from-master
>>
>> Then I click thru the bug report links to determine whether it’s worth a
>> backport and briefly classify them. If I have cycles, I also request
>> backports where it’s easy (== a mere 'Cherry-Pick to' button click).
>>
>> After that, those interested in maintaining neutron stable branches can
>> take those bugs one by one and handle them, which means: checking where it
>> really applies for backport; creating backport reviews (solving conflicts,
>> making tests pass). After it’s up for review for all branches affected and
>> applicable, the bug is removed from the list.
>>
>> I started on that path two weeks ago, doing an initial sweep thru all
>> commits starting from the stable/liberty spin-off. If enough participants join
>> the process, we may think of going back into git history to backport
>> interesting fixes from stable/liberty into stable/kilo.
>>
>> Don’t hesitate to ask about details of the process, and happy backporting,
>>
>> Wow, this is a great idea Ihar! Thanks for taking this on! Count me in on
> helping with this effort as well.
>
> Thanks,
> Kyle
>
>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-15 Thread Salvatore Orlando
Hi Germy,

It seems that you're looking at solutions for ensuring consistency between
the "desired" configuration (Neutron), and the actual one (whatever is in
your backend) at startup.
This has been discussed several times in the past - not just for
synchronization at startup, but also for ensuring neutron and the backend
are in sync at each operation.

At a very high level I think a "general" solution is only partially
possible. At some point there must be a plugin interface that verifies
whether, for a given resource, data on the backend differ from those in
neutron.
The component which evaluates the result of such an operation and updates the
status of the resources being synchronised could instead be shared across
plugins.
For the ML2 plugin I don't see any architectural difference, beyond the
fact that the plugin level operation should probably query all the
mechanism drivers.
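To give an idea, the shared part could boil down to a couple of abstract
methods (names are hypothetical; each plugin or driver would supply its own
implementation):

    import abc

    class ResourceSyncInterface(abc.ABC):
        """Generic contract for verifying/restoring backend consistency."""

        @abc.abstractmethod
        def is_out_of_sync(self, context, resource_type, resource_id):
            """Return True if backend state differs from Neutron's."""

        @abc.abstractmethod
        def resync(self, context, resource_type, resource_id):
            """Push Neutron's desired state to the backend."""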

Anyway, if this is something you'd like to see implemented (regardless of
whether my analysis matches your use case) you should consider filing an
RFE bug so that it will be considered during the drivers meetings.

Salvatore

On 14 October 2015 at 11:43, Germy Lure  wrote:

> Hi Salvatore and Kevin,
>
> I'm sorry for replying so late.
> I wanted to see whether the community had considered data sync for these
> two styles of integration (agent and controller). To solve integrating
> multiple vendors' controllers, I need some help from the community. That's
> the original purpose of this thread. In other words, I had no idea when I
> sent this message and I was just asking for help.
>
> Anyway, the issues I mentioned in my last mail still exist. We still need
> to face them. I have some rough ideas for your reference.
>
> 1. Try our best to keep the source data correct.
> Think about the CREATE operation: if the backend hit an exception and
> Neutron timed out, then the record should be destroyed or marked ERROR to
> warn the operator. If Neutron hit an exception, the backend will have an
> extra record. To avoid this, Neutron could store and mark a record
> CREATE_PENDING before pushing it to the backend, then scan the data and
> check it against the backend after restarting when an exception occurs. If
> the record in Neutron is extra, destroy it or mark it ERROR to warn the
> operator. UPDATE and DELETE need similar logic.
> Currently in Neutron, some objects have XX_PENDING states defined and some
> do not.
> 2. Check each other when they restart.
> After restarting, the backend should report the states of all objects and
> may re-load data from Neutron to rebuild or check local data. When Neutron
> restarts, it should get data from the backend and check it. Maybe it can
> notify the backend, and the backend can act as if it just restarted.
> All in all, I think it's enough to keep the data correct when you
> write (CUD) it and to check it when restarting.
>
> About implementation, I think a common framework is best. Plugins or even
> drivers would just provide methods for the backend to load data, update
> state, etc.
>
> As I mentioned earlier, this is just a rough and superficial idea. Any
> comments are welcome.
>
> Thanks,
> Germy
> .
>
>
>
> On Tue, Oct 13, 2015 at 3:28 AM, Kevin Benton  wrote:
>
>> >*But there is no such feature in Neutron. Right? Will the community
>> merge it soon? And can we consider it with agent-style mechanism together?*
>>
>> The agents have their own mechanisms for getting information from the
>> server. The community has no plans to merge a feature that is going to be
>> different for almost every vendor.
>>
>> We tried to come up with some common syncing stuff in the recent ML2
>> meeting, the various backends had different methods of detecting when they
>> were out of sync with Neutron (e.g. headers in hashes, recording errors,
>> etc), all of which depended on the capabilities of the backend. Then the
>> sync method itself was different between backends (sending deltas, sending
>> entire state, sending a replay log, etc).
>>
>> About the only thing they have in common is that they need a way detect
>> if they are out of sync and they need a method to sync. So that's two
>> abstract methods, and we likely can't even agree on when they should be
>> called.
>>
>> Echoing Salvatore's comments, what is it that you want to see?
>>
>> On Mon, Oct 12, 2015 at 12:29 AM, Germy Lure 
>> wrote:
>>
>>> Hi Kevin,
>>>
>>> *Thank you for your response. Periodic data checking is a popular and
>>> effective method to sync info. But there is no such feature in Neutron.
>>> Right? Will the community merge it soon? And can we consider it with
>>> agent-style mechanism together?*
>>>
>>> A vendor-specific extension or a vendor-private periodic task is
>>> not a good solution, I think, because it means that Neutron-Server could not
>>> integrate with multiple vendors' controllers, and even the controller of
>>> those vendors that introduced this extension or task could not integrate
>>> with a standard community Neutron-Server.
>>> That is just the tip of the iceberg. Many of the other problems
>>

Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-12 Thread Salvatore Orlando
Inline,
Salvatore

On 12 October 2015 at 09:29, Germy Lure  wrote:

> Hi Kevin,
>
> *Thank you for your response. Periodic data checking is a popular and
> effective method to sync info. But there is no such feature in Neutron.
> Right? Will the community merge it soon? And can we consider it with
> agent-style mechanism together?*
>
> A vendor-specific extension or a vendor-private periodic task is
> not a good solution, I think, because it means that Neutron-Server could not
> integrate with multiple vendors' controllers, and even the controller of
> those vendors that introduced this extension or task could not integrate
> with a standard community Neutron-Server.
>

I am not sure what issue you are seeing here and what you are
advocating for.
If you're asking for a generic interface for synchronising the neutron
database with a backend, that could be implemented, but it would still be
up to plugin and driver maintainers to use that interface.



> That is just the tip of the iceberg. Many other problems result from this,
> such as bug fixing, upgrades, patching, etc.
> But wait, is it a vendor-specific feature? Of course not. All software
> systems need data checking.
>

If you have something in mind I'd like to understand more about your use
case (I got the issue, I want to understand what you're trying to achieve),
and how you think you could possibly implement it.

>
> Many thanks.
> Germy
>
>
> On Sun, Oct 11, 2015 at 4:28 PM, Kevin Benton  wrote:
>
>> You can have a periodic task that asks your backend if it needs sync info.
>> Another option is to define a vendor-specific extension that makes it
>> easy to retrieve all info in one call via the HTTP API.
>>
>> On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure  wrote:
>>
>>> Hi all,
>>>
>>> After restarting, Agents load data from Neutron via RPC. What about a
>>> 3rd-party controller? It can only re-gather data via the NBI. Right?
>>>
>>> Is it possible to provide some mechanism for those controllers and
>>> agents to sync data? Or is there something else I missed?
>>>
>>> Thanks
>>> Germy
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-12 Thread Salvatore Orlando
Inline,
Salvatore

On 12 October 2015 at 10:23, Germy Lure  wrote:

> Thank you, Kevin.
> So the community just divided the whole OpenStack into separate
> sub-projects (Nova, Neutron, etc.), but it was not taken into account whether
> those modules can work together across different versions. Yes?
>

The developer community has been addressing this by ensuring, to some
extent, backward compatibility between the APIs used for communicating
across services. This is what allows a component at version X to operate
with another component at version Y.

In the case of Neutron and Nova, this is only done with REST over HTTP.
Other projects also use RPC over AMQP.
Neutron strived to be backward compatible since the v2 API was introduced
in Folsom. Therefore you should be able to run Neutron Kilo with Nova
Havana; as Kevin noted, you might want to disable notifications on the
Neutron side as the nova extension that processes them does not exist in
Havana.



>
> If so, is it technically possible to keep the components compatible with
> each other? How about just N+1? And how about just within Neutron?
>

While it is surely possible, enforcing this, as far as I can tell, is not a
requirement for OpenStack projects. Indeed, it is not something which is
tested in the gate. It would be interesting to have it as a part of a
rolling upgrade test for an OpenStack cloud, where, for instance, you first
upgrade the networking service and then the compute service. But beyond
that I do not think the upstream developer community should provide any
additional guarantee, notwithstanding guarantees on API backward
compatibility.


> Germy
> .
>
> On Sun, Oct 11, 2015 at 4:33 PM, Kevin Benton  wrote:
>
>> For the particular Nova Neutron example, the Neutron Kilo API should
>> still be compatible with the calls Havana Nova makes. I think you will need
>> to disable the Nova callbacks on the Neutron side because the Havana
>> version wasn't expecting them.
>>
>> I've tried out many N+1 combinations (e.g. Icehouse + Juno, Juno + Kilo)
>> but I haven't tried a gap that big.
>>
>> Cheers,
>> Kevin Benton
>>
>> On Sat, Oct 10, 2015 at 1:50 AM, Germy Lure  wrote:
>>
>>> Hi all,
>>>
>>> As you know, openstack projects are developed separately. And
>>> theoretically, people can create networks with Neutron in Kilo version for
>>> Nova in Havana version.
>>>
>>> Did anyone try it?
>>> Do we have some pages showing which combinations can work together?
>>>
>>> Thanks.
>>> Germy
>>> .
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pecan] [mistral] Pecan python3 compatibility

2015-10-01 Thread Salvatore Orlando
Or, since many OpenStack projects now use Pecan, we could fix this
ourselves as a thank-you note to the Pecan developers!

Salvatore

On 1 October 2015 at 21:08, Ryan Petrello 
wrote:

> Yep, this definitely looks like a Python3-specific bug.  If you'll open a
> ticket, I'll take a look as soon as I get a chance :)!
>
>
> On 10/01/15 02:44 PM, Doug Hellmann wrote:
>
>> Excerpts from Nikolay Makhotkin's message of 2015-10-01 16:50:04 +0300:
>>
>>> Hi, pecan folks!
>>>
>>> I have an question for you about python3 support in pecan library.
>>>
>>> In Mistral, we are trying to fix the codebase to support python3, but
>>> we are not able to do this since we hit an issue with the pecan library.
>>> If you want to see the details, see this traceback - [1].
>>> (Actually, something is wrong with HooksController and walk_controller
>>> method)
>>>
>>> Does pecan officially support python3 (especially, python3.4 or
>>> python3.5)
>>> or not?
>>> I didn't find any info about that in pecan repository.
>>>
>>> [1] http://paste.openstack.org/show/475041/
>>>
>>>
>> The intent is definitely to support python 3. This sounds like a bug, so
>> I recommend opening a ticket in the pecan bug tracker.
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Ryan Petrello
> Senior Developer, DreamHost
> ryan.petre...@dreamhost.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn][vtep] Proposal: support for vtep-gateway in ovn

2015-09-24 Thread Salvatore Orlando
Random comments inline.

Salvatore

On 24 September 2015 at 14:05, Russell Bryant  wrote:

> On 09/24/2015 01:17 AM, Amitabha Biswas wrote:
> > Hi everyone,
> >
> > I want to open up the discussion regarding how to support OVN
> > VTEP gateway deployment and its lifecycle in Neutron.
>
> Thanks a lot for looking into this!
>
> > In the "Life Cycle of a VTEP gateway" part in the OVN architecture
> > document (http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf),
> > step 3 is where the Neutron OVN plugin is involved. At a minimum, the
> > Neutron OVN plugin will enable setting the type as "vtep" and the
> > vtep-logical-switch and vtep-physical-switch options in the
> > OVN_Northbound database.
>
> I have the docs published there just to make it easier to read the
> rendered version.  The source of that document is:
>
> https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml
>
> > There are 2 parts to the proposal/discussion - a short term solution and
> > a long term one:
> >
> > A short term solution (proposed by Russell Bryant) is similar to the
> > work that was done for container support in OVN - using a binding
> > profile http://networking-ovn.readthedocs.org/en/latest/containers.html.
> > A ovn logical network/switch can be mapped to a vtep logical gateway by
> > creating a port in that logical network and creating a binding profile
> > for that port in the following manner:
> >
> > neutron port-create --binding-profile
> > '{"vtep-logical-switch":"vtep_lswitch_key",
> > "vtep-physical-switch":"vtep_pswitch_key"}' private.
> >
> > Where vtep-logical-switch and vtep-physical-switch should have been
> > defined in the OVN_Southbound database by the previous steps (1,2) in
> > the life cycle.
>
> Yes, this sounds great to me.  Since there's not a clear well accepted
> API to use, we should go this route to get the functionality exposed
> more quickly.  We should also include in our documentation that this is
> not expected to be how this is done long term.
>
> The comparison to the containers-in-VMs support is a good one.  In that
> case we used binding:profile as a quick way to expose it, but we're
> aiming to support a proper API.  For that feature, we've identified the
> "VLAN aware VMs" API as the way forward, which will hopefully be
> available next cycle.
>
> > For the longer term solution, there needs to be a discussion:
> >
> > Should the knowledge about the physical and logical step gateway should
> > be exposed to Neutron - if yes how? This would allow a Neutron NB
> > API/extension to bind a “known” vtep gateway to the neutron logical
> > network. This would be similar to the workflow done in the
> > networking-l2gw extension
> > https://review.openstack.org/#/c/144173/3/specs/kilo/l2-gateway-api.rst
> >
> > 1. Allow the admin to define and manage the vtep gateway through Neutron
> > REST API.
> >
> > 2. Define connections between Neutron networks and gateways. This is
> > conceptually similar to Step 3 of the step gateway performed by the OVN
> > Plugin in the short term solution.
>
> networking-l2gw does seem to be the closest thing to what's needed, but
> it's not a small amount of work.  I think the API might need to be
> extended a bit for our needs.  A bigger concern for me is actually with
> some of the current implementation details.
>

It is indeed. While I very much like the solution based on binding profiles,
it does not work very well from a UX perspective in environments where
operators control the whole cloud with OpenStack tools.
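For completeness, the short-term binding:profile workflow quoted above,
expressed with python-neutronclient (credentials and IDs are placeholders;
the vtep-* values must match what was defined in the OVN_Southbound database
in the earlier steps):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='<password>',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')
    # Create a port on the logical network, carrying the VTEP references
    # in binding:profile (an admin-only attribute).
    port = neutron.create_port({'port': {
        'network_id': '<private-net-id>',
        'binding:profile': {
            'vtep-logical-switch': 'vtep_lswitch_key',
            'vtep-physical-switch': 'vtep_pswitch_key'},
    }})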


>
> One particular issue is that the project implements the ovsdb protocol
> from scratch.  The ovs project provides a Python library for this.  Both
> Neutron and networking-ovn use it, at least.  From some discussion, I've
> gathered that the ovs Python library lacked one feature that was needed,
> but has since been added because we wanted the same thing in
> networking-ovn.
>

My take here is that we don't need to use the whole implementation of
networking-l2gw, but only the APIs and the DB management layer it exposes.
Networking-l2gw provides a VTEP network gateway solution that, if you want,
will eventually be part of Neutron's "reference" control plane.
OVN provides its implementation; I think it should be possible to leverage
networking-l2gw either by pushing an OVN driver there, or implementing the
same driver in openstack/networking-ovn.


>
> The networking-l2gw route will require some pretty significant work.
> It's still the closest existing effort, so I think we should explore it
> until it's absolutely clear that it *can't* work for what we need.
>

I would say that it is definitely not trivial but probably a bit less than
"significant". abhraut from my team has done something quite similar for
openstack/vmware-nsx [1]


> > OR
> >
> > Should OVN pursue it’s own Neutron extension (including vtep gateway
> > support).
>
> I don't think this option provides a lot of value over the short term
> binding:profile s

Re: [openstack-dev] [neutron] Neutron debugging tool

2015-09-22 Thread Salvatore Orlando
Thanks Ganesh!

I did not know about this tool.
I also quite like the network visualization bits, though I wonder how
practical that would be when one debugs very large deployments.

I think it wouldn't be a bad idea to list these tools in the networking guide
or in neutron's devref, or both.

Salvatore

On 22 September 2015 at 04:25, Ganesh Narayanan (ganeshna) <
ganes...@cisco.com> wrote:

> Another project for diagnosing OVS in Neutron:
>
> https://github.com/CiscoSystems/don
>
> Thanks,
> Ganesh
>
> From: Salvatore Orlando 
> Reply-To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>
> Date: Monday, 21 September 2015 2:55 pm
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [neutron] Neutron debugging tool
>
> It indeed sounds like easyOVS covers what you're aiming at too.
> However, from what I gather there is still plenty to do in easyOVS, so
> perhaps rather than starting a new toolset from scratch you might build on
> the existing one.
>
> Personally I'd welcome its adoption into the Neutron stadium as debugging
> control plane/data plane issues in the neutron reference impl is becoming
> difficult even for expert users and developers.
> I'd just suggest renaming it because calling it "OVS" is just plain wrong.
> The neutron reference implementation and OVS are two distinct things.
>
> As for neutron-debug, this is a tool that was developed in the early
> stages of the project to verify connectivity using "probes" in namespaces.
> These probes are simply tap interfaces associated with neutron ports. The
> neutron-debug tool is still used in some devstack exercises. Nevertheless,
> I'd rather keep building something like easyOVS and then deprecate
> neutron-debug rather than develop it further.
>
> Salvatore
>
>
> On 21 September 2015 at 02:40, Li Ma  wrote:
>
>> AFAIK, there is a project available on GitHub that does the same
>> thing.
>> https://github.com/yeasy/easyOVS
>>
>> I used it before.
>>
>> On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov 
>> wrote:
>> > Hello,
>> >
>> > I am planning to develop a tool for network debugging. Initially, it
>> > will handle the DVR case, which can also be extended to others too. Based
>> > on my OpenStack deployment/operations experience, I am planning to
>> > handle common pitfalls/misconfigurations, such as:
>> > 1) check external gateway validity
>> > 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
>> > compute/network hosts
>> > 3) execute probing commands inside namespaces, to verify reachability
>> > 4) etc.
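
Check (2) from that list is easy to sketch; the command assumes a standard
Linux network-namespace setup and the usual qrouter-/qdhcp- naming:

import subprocess

def namespace_exists(kind, resource_id):
    # e.g. namespace_exists('qrouter', '0b7e0fc1-...') on a network node.
    name = '%s-%s' % (kind, resource_id)
    output = subprocess.check_output(['ip', 'netns', 'list'])
    return name in output.decode().split()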
>> >
>> > I came across neutron-debug [1], which mostly focuses on namespace
>> > debugging. Its coverage is limited to OpenStack, while I am planning
>> > to cover compute/network nodes as well. In my experience, I had to ssh
>> > to the host(s) to accurately diagnose the failure (e.g., cases 1 and 2
>> > above). The tool I am considering will handle these, given the host
>> > credentials.
>> >
>> > I'd like to get the community's feedback on the utility of such a
>> > debugging tool. Do people use neutron-debug in their OpenStack
>> > environments? Does the tool I am planning to develop, with complete
>> > diagnosis coverage, sound useful? Is anyone interested in joining the
>> > development? All feedback is welcome.
>> >
>> > Thanks,
>> >
>> > - Nodir
>> >
>> > [1]
>> http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>>
>> Li Ma (Nick)
>> Email: skywalker.n...@gmail.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron debugging tool

2015-09-21 Thread Salvatore Orlando
It indeed sounds like easyOVS covers what you're aiming at too.
However, from what I gather there is still plenty to do in easyOVS, so
perhaps rather than starting a new toolset from scratch you might build on
the existing one.

Personally I'd welcome its adoption into the Neutron stadium as debugging
control plane/data plane issues in the neutron reference impl is becoming
difficult even for expert users and developers.
I'd just suggest renaming it because calling it "OVS" is just plain wrong.
The neutron reference implementation and OVS are two distinct things.

As for neutron-debug, this is a tool that was developed in the early
stages of the project to verify connectivity using "probes" in namespaces.
These probes are simply tap interfaces associated with neutron ports. The
neutron-debug tool is still used in some devstack exercises. Nevertheless,
I'd rather keep building something like easyOVS and then deprecate
neutron-debug rather than develop it further.

Salvatore


On 21 September 2015 at 02:40, Li Ma  wrote:

> AFAIK, there is a project available on GitHub that does the same thing.
> https://github.com/yeasy/easyOVS
>
> I used it before.
>
> On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov 
> wrote:
> > Hello,
> >
> > I am planning to develop a tool for network debugging. Initially, it
> > will handle DVR case, which can also be extended to other too. Based
> > on my OpenStack deployment/operations experience, I am planning to
> > handle common pitfalls/misconfigurations, such as:
> > 1) check external gateway validity
> > 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
> > compute/network hosts
> > 3) execute probing commands inside namespaces, to verify reachability
> > 4) etc.
> >
> > I came across neutron-debug [1], which mostly focuses on namespace
> > debugging. Its coverage is limited to OpenStack, while I am planning
> > to cover compute/network nodes as well. In my experience, I had to ssh
> > to the host(s) to accurately diagnose the failure (e.g., cases 1 and 2
> > above). The tool I am considering will handle these, given the host
> > credentials.
> >
> > I'd like to get the community's feedback on the utility of such a
> > debugging tool. Do people use neutron-debug in their OpenStack
> > environments? Does the tool I am planning to develop, with complete
> > diagnosis coverage, sound useful? Is anyone interested in joining the
> > development? All feedback is welcome.
> >
> > Thanks,
> >
> > - Nodir
> >
> > [1]
> http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
>
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Pecan and Liberty-3

2015-08-28 Thread Salvatore Orlando
I'll leave it to Kevin's more informed judgment to say whether it is
appropriate to merge:

[1] is a list of patches still under review on the feature branch. Some of
them fix issues (like executing API actions), or implement TODOs

This is the current list of TODOs:
salvatore@ubuntu:/opt/stack/neutron$ find ./neutron/newapi/ -name \*.py |
xargs grep -n "TODO"
./neutron/newapi/hooks/context.py:50:# TODO(kevinbenton): is_admin
logic
./neutron/newapi/hooks/notifier.py:22:# TODO(kevinbenton): implement
./neutron/newapi/hooks/member_action.py:28:# TODO(salv-orlando):
This hook must go. Handling actions like this is
./neutron/newapi/hooks/quota_enforcement.py:33:#
TODO(salv-orlando): This hook must go when adaptin the pecan code to
./neutron/newapi/hooks/attribute_population.py:59:#
TODO(kevinbenton): the parent_id logic currently in base.py
./neutron/newapi/hooks/ownership_validation.py:34:    #
TODO(salvatore-orlando): consider whether this check can be folded
./neutron/newapi/app.py:40:#TODO(kevinbenton): error templates
./neutron/newapi/controllers/root.py:150:# TODO(kevinbenton): allow
fields after policy enforced fields present
./neutron/newapi/controllers/root.py:160:# TODO(kevinbenton): bulk!
./neutron/newapi/controllers/root.py:190:# TODO(kevinbenton): bulk?
./neutron/newapi/controllers/root.py:197:# TODO(kevinbenton): bulk?

In my opinion the pecan API is now "working-ish"; however, we know it is not
yet 100% functionally equivalent, and most importantly we don't know how well
it works. So far a few corners have been cut when it comes to testing.
Even if "it works", that alone does not make it usable. Unfortunately I don't
know what criteria the core team evaluates for merging it back (and
I'm sure that for this release at least the home-grown WSGI won't be
replaced).

Salvatore

[1]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/pecan,n,z

On 28 August 2015 at 22:51, Kyle Mestery  wrote:

> Folks:
>
> Kevin wants to merge the pecan stuff, and I agree with him. I'm on
> vacation next week during Liberty-3, so Armando, Carl and Doug are running
> the show while I'm out. I would guess that if Kevin thinks it's ok to merge
> it in before Liberty-3, I'd go with his opinion and let it happen. If not,
> it can get an FFE and we can do it post Liberty-3.
>
> I'm sending this to the broader openstack-dev list so that everyone can be
> aware of this plan, and so that Ihar can help collapse things back next
> week with Doug on this.
>
> Thanks!
> Kyle
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] 40% failure on neutron python3.4 tests in the gate

2015-08-28 Thread Salvatore Orlando
On 28 August 2015 at 16:57, Sean Dague  wrote:

> On 08/28/2015 11:20 AM, Assaf Muller wrote:
> > To recap, we had three issues impacting the gate queue:
> >
> > 1) The neutron functional job has had a high failure rate for a while
> > now. Since it's impacting the gate,
> > I've removed it from the gate queue but kept it in the Neutron check
> queue:
> > https://review.openstack.org/#/c/218302/
> >
> > If you'd like to help, the the list of bugs impacting the Neutron
> > functional job is linked in that patch.
> >
> > 2) A new Tempest scenario test was added that caused the DVR job failure
> > rate to sky rocket to over 50%.
> > It actually highlighted a legit bug with DVR and legacy routers. Kevin
> > proposed a patch that skips that test
> > entirely until we can resolve the bug in Neutron:
> > https://review.openstack.org/#/c/218242/ (Currently it tries to skip the
> > test conditionally, the next PS will skip the test entirely).
> >
> > 3) The Neutron py34 job has been made unstable due to a recent change
> > (By me, yay) that made the tests
> > run with multiple workers. This highlighted an issue with the Neutron
> > unit testing infrastructure, which is fixed here:
> > https://review.openstack.org/#/c/217379/
> >
> > With all three patches merged we should be good to go.
>
> Well, with all 3 of these we should be much better for sure. There are
> probably additional issues causing intermittent failures which should be
> looked at. These 3 are definitely masking anything else.
>

Sadly, since the issues are independent, it is very likely for one of the
patches to fail jenkins tests because of one of the other two issues.
If the situation persists, is it crazy to consider switching neutron-py34 and
neutron-functional to non-voting until these patches merge?
Neutron cores might abstain from approving patches (unless trivial or
documentation-only) while these jobs are non-voting.


>
> https://etherpad.openstack.org/p/gate-fire-2015-08-28 is a set of
> patches to promote for things causing races in the gate (we've got a
> cinder one was well). If other issues are known with fixes posted,
> please feel free to add them with comments.
>



>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Kuryr - virtual sprint

2015-08-19 Thread Salvatore Orlando
Hi Gal,

even if I've been a lurker so far, I'm interested in attending for learning
and contributing to it with my massive bug-injecting skills!

You said "virtual sprint" and "somewhere in september" - I think
"somewhere" refers to dates?
Anyway I am pretty much open to any date from September 7th onwards.

Salvatore


On 19 August 2015 at 19:57, Gal Sagie  wrote:

> Hello everyone,
>
> During our last meeting an idea was brought up that we try to do a virtual
> sprint
> for Kuryr somewhere in September.
>
> Basically the plan is very similar to the mid cycle sprints or feature
> sprints where
> we iterate on a couple of tasks online and close gaps we might have in
> Kuryr.
> (I think we are talking about 2-3 days)
>
> The agenda for the sprint is dependent on the amount of work we finish by
> then,
> but it will probably consist of containerising some of the common plugins
> and connecting
> things end to end. (for host networking)
>
> I started this email in order to find the best dates for it, so if you
> plan on participating
> please share your preferred dates (anyone that has a Neutron plugin might
> want to offer a containerised version of it with Kuryr to integrate with
> Docker and libnetwork, and the sprint
> is probably a good place to start doing it)
>
> Thanks
> Gal.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Etherpad from the Ops Meetup

2015-08-19 Thread Salvatore Orlando
The etherpad contains some complaints around the DVR implementation that might
deserve further exploration.
However, as pointed out by Jay, the comments made leave very little room
for actionable items.
It would be great if the author(s) could fill in with more details.

Salvatore

On 19 August 2015 at 23:11, Ryan Moats  wrote:

> One thing that came up during lunch was including unit and functional testing
> of dual stack in the check and gate queues - I was regaled over lunch with
> one operator's experiences in trying to run Neutron on a dual stack system.
>
> Ryan Moats (regXboi)
>
> Edgar Magana  wrote on 08/19/2015 03:43:44 PM:
>
> > From: Edgar Magana 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 08/19/2015 03:44 PM
>
> > Subject: Re: [openstack-dev] [Neutron] Etherpad from the Ops Meetup
> >
> > Actually, there were very few requirements collected. So, your
> > summary is correct.
> >
> > I feel that this time we did not get as much input as we got during
> > the Ops meet-up in Philadelphia.
> >
> > I also recommend reading the burning issues etherpads; there are a
> > few suggestions on the networking side. Actually, I believe
> > operators have expressed in this session some good feedback that they
> > probably did not want to repeat during the networking section.
> >
> > https://etherpad.openstack.org/p/PAO-ops-burning-issues
> >
> > Cheers,
> >
> > Edgar
> >
> > From: Assaf Muller
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > Date: Wednesday, August 19, 2015 at 1:34 PM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > Subject: Re: [openstack-dev] [Neutron] Etherpad from the Ops Meetup
> >
> > On Wed, Aug 19, 2015 at 2:52 PM, Edgar Magana  > > wrote:
> > Folks,
> >
> > I just want to share with you the feedback collected today during
> > the networking session on Ops Meet-up:
> > https://etherpad.openstack.org/p/PAO-ops-network-model
> >
> > Special thanks to Ryan and Doug for helping on some questions.
> >
> > The only action items for Neutron developers that I can spot are:
> > 1. Linux bridge + DVR / multi host
> > 2. Prevent data loss when restarting the OVS agent (The patch [1] is
> > very close to merge anyway, nothing more to do here)
> > 3. Work as described by [2] (Big deployers team)
> > The rest is either polling (Who uses what feature / plugin / etc) or
> > generic comments with no actionable bugs or RFEs.
> > Did I miss anything?
> >
> > [1] https://review.openstack.org/#/c/182920/
> > [2] https://etherpad.openstack.org/p/Network_Segmentation_Usecases
> >
> > Cheers,
> >
> > Edgar
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][bgpvpn] Service Plugin vs Service driver

2015-08-19 Thread Salvatore Orlando
my 0.02€ on the matter inline.

Regards,
Salvatore

On 18 August 2015 at 23:45, Mathieu Rohon  wrote:

> hi brandon,
>
> thanks for your answer.
>
> my answers inline,
>
>
>
> On Tue, Aug 18, 2015 at 8:53 PM, Brandon Logan <
> brandon.lo...@rackspace.com> wrote:
>
>> ​So let me make sure I understand this. You want to do a separate service
>> plugin for what would normally be separate drivers under one service
>> plugin.  The reasons for this are:
>>
>>
>> 1. You don't want users to have the ability to choose the type; you want it
>> always to be the same one
>>
While in theory it is possible to have multiple BGPVPN providers in the
same deployment, there are control and data plane aspects that the service
type framework at the moment cannot deal with. Mathieu brought some
examples in the bug report. The bottom line appears to be that the choice
of the l3 service plugin (or whatever serves l3 in your deployment) also
dictates the choice of the BGPVPN service provider to employ.

> 2. Some types do want to be the source of truth of the data stored,
>> instead of it being the service plugin database.
>>
This point has little to do with service types. It's about the fact that
plugins are not required to implement the various db mixins in neutron.db
and are therefore not required to use the neutron DB.

>
>> First, let me address the possibility of a solution using one service
>> plugin and multiple drivers per type:
>>
>>
>> I think that you can overcome #1 in the instantiation of the service
>> plugin by checking if there is more than one provider active; if so you can
>> just throw an exception saying you can only have one.  I'd have to look at it
>> more to see if there are any caveats to this, but I think that would work.
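
A schematic version of that single-provider guard (class and exception
choices are illustrative, and the service type manager call may differ
across Neutron releases):

from neutron.db import servicetype_db as st_db

class BGPVPNPlugin(object):
    def __init__(self):
        super(BGPVPNPlugin, self).__init__()
        manager = st_db.ServiceTypeManager.get_instance()
        providers = manager.get_service_providers(
            None, filters={'service_type': ['BGPVPN']})
        if len(providers) != 1:
            # Refuse to start with zero or multiple providers configured.
            raise SystemExit("The BGPVPN service type requires exactly one "
                             "active provider, found %d" % len(providers))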
>>
>>
>> For #2, assuming #1 works, then the drivers that are defined can have
>> some boolean that they set that will tell the plugin whether they are the
>> source of truth or not, and depending on that you can store the data in the
>> service plugin's db or just pass the data along, also pass GET requests to
>> the drivers as well.
>>
>>
> I agree that those workarounds will surely work, but I wonder what is the
> meaning of a service plugin/type that can only support one service
> provider? Can't the service plugin be the service provider directly?
>

I believe there is some value, but I am not able to quantify it at the
moment.
- A single service plugin also implies (more or less) a common user-facing
API. I really don't want to end up in a condition where the user API looks
different (or the workflow is different) according to what's backing the
neutron BGPVPN implementation.
- A single service plugin provides a common place for all the boilerplate
management logic. This works for most drivers, but not for those which don't
rely on the neutron DB as a data source (unless you manage to build a
sqlalchemy dialect for things such as opencontrail APIs, but I seriously
doubt that would be feasible).
- Distinct service plugins might lead to different workflows. This is not
necessarily a bad thing, because integration for some backends might need
it. However, this means that during the review phase particular attention
should be paid to ensuring the behaviour of each service plugin respects the
API specification.


>
> The reasons why I'm considering this change are :
>
> 1. I'm not sure we would have some use cases where we would be able to
> choose one bgpvpn backend independently from the provider of the core
> plugin (or a mech driver in the ML2 case) and/or the router plugin.
> If one use ODL to manage its core resources, he won't be able to use nuage
> or contrail to manage its bgpvpn connection.
> The bgpvpn project is more about having a common API than having the
> capacity to mix backends. At least for the moment.
>

I agree with this; but this problem exists regardless of whether you have a
single service plugin with drivers or multiple service plugins. You are
unlikely to be able to use the contrail BGPVPN service plugin if core and
l3 are managed by ODL, I think.


>
> 2. I'm also considering that each plugin, which would be backend
> dependent, could declare what features it supports through the use of
> extensions.
>

Unfortunately extensions are the only way to declare supported capabilities
at the moment. But please - don't end up allowing each service plugin to
expose a different API.


> Each plugin would be a "bgpvpn" service type, and would implement the
> bgpvpn extension, but some of them could extend the bgpvpn_connection
> resource with other extensions also hosted in the bgpvpn project. Since
> some backends only support attachment of networks to a bgpvpn_connection,
> others support attachment of routers, and others both attachments, I'm
> considering having an extension for each type of attachment. Then the
> bgpvpn plugin declares what extensions it supports and the end user can act
> accordingly depending on the scan of neutron extensions.
>

This is not good. It appears that y

Re: [openstack-dev] [Stable][Nova] VMware NSXv Support

2015-08-13 Thread Salvatore Orlando
On 13 August 2015 at 09:50, John Garbutt  wrote:

> On Wednesday, August 12, 2015, Thierry Carrez 
> wrote:
>
>> Gary Kotton wrote:
>> >
>> > On 8/12/15, 12:12 AM, "Mike Perez"  wrote:
>> >> On 15:39 Aug 11, Gary Kotton wrote:
>> >>> On 8/11/15, 6:09 PM, "Jay Pipes"  wrote:
>> >>>
>>  Are you saying that *new functionality* was added to the stable/kilo
>>  branch of *Neutron*, and because new functionality was added to
>>  stable/kilo's Neutron, that stable/kilo *Nova* will no longer work?
>> >>>
>> >>> Yes. That is exactly what I am saying. The issue is as follows. The
>> >>> NSXv
>> >>> manager requires the virtual machine's VNIC index to enable the
>> security
>> >>> groups to work. Without that a VM will not be able to send and receive
>> >>> traffic. In addition to this the NSXv plugin does not have any agents
>> so
>> >>> we need to do the metadata plugin changes to ensure metadata support.
>> So
>> >>> effectively with the patches: https://review.openstack.org/209372 and
>> >>> https://review.openstack.org/209374 the stable/kilo nova code will
>> not
>> >>> work with the stable/kilo neutron NSXv plugin.
>> >> 
>> >>
>> >>> So what do you suggest?
>> >>
>> >> This was added in Neutron during Kilo [1].
>> >>
>> >> It's the responsibility of the patch owner to revert things if
>> something
>> >> doesn't land in a dependency patch of some other project.
>> >>
>> >> I'm not familiar with the patch, but you can see if Neutron folks will
>> >> accept
>> >> a revert in stable/kilo. There's no reason to get other projects
>> involved
>> >> because this wasn't handled properly.
>> >>
>> >> [1] - https://review.openstack.org/#/c/144278/
>> >
>> > So you are suggesting that we revert the neutron plugin? I do not think
>> > that a revert is relevant here.
>>
>> Yeah, I'm not sure reverting the Neutron patch would be more acceptable.
>> That one landed in Neutron kilo in time.
>>
>> The issue here is that due to Nova's review velocity during the kilo
>> cycle (and arguably the failure to raise this as a cross-project issue
>> affecting the release), the VMware NSXv support was shipped as broken in
>> Kilo, and requires non-trivial changes to get fixed.
>
>
> I see this as Nova not shipping with VMware NSXv support in kilo, the
> feature was never completed, rather than it being broken. I could be
> missing something, but I also know that difference doesn't really help
> anyone.
>
>
>> We have two options: bending the stable rules to allow the fix to be
>> backported, or document it as broken in Kilo with the invasive patches
>> being made available for people and distributions who still want to
>> apply it.
>>
>> Given that we are 4 months into Kilo, I'd say stable/kilo users are used
>> to this being broken at this point, so my vote would go for the second
>> option.
>
>
> This would be backporting a new driver to an older release. That seems
> like a bad idea.
>
>
>> That said, we should definitely raise [1] as a cross-project issue and
>> see how we could work it into Liberty, so that we don't end up in the
>> same dark corner in 4 months. I just don't want to break the stable
>> rules (and the user confidence we've built around us applying them) to
>> retroactively pay back review velocity / trust issues within Nova.
>>
>> [1] https://review.openstack.org/#/c/165750/
>>
>>
> So this is the same issue. The VMware neutron driver has merged support
> for a feature that we have not managed to get into Nova yet.
>
> First the long term view...
>
> This is happening more frequently with Cinder drivers/features, Neutron
> things, and to a lesser extent Glance.
>
> The great work the Cinder folks have done with brick, is hopefully going
> to improve the situation for Cinder. There are a group of folks working on
> a similar VIF focused library to help making it easier to add support for
> new Neutron VIF drivers without needing to merge things in Nova.
>
> Right now those above efforts are largely focused on libvirt, but using
> oslo.vmware, or probably something else, I am sure we could evolve
> something similar for VMware, but I haven't dug into that.
>

That is definitely the way to go in my opinion. I reckon VIF plugging is an
area where there is a lot of coupling with Neutron, and "decentralizing"
will definitely be beneficial for both contributors and reviewers. It
should be ok to have a VMware-specific VIF library - it would not really
work like cinder's brick, but from the nova perspective I think this does
not matter.


>
> There are lots of coding efforts and process efforts to make the most of
> our current review bandwidth and to expand that bandwidth, but I don't
> think it's helpful to get into that here.
>
> So, more short term and specific points...
>
> This patch had no bug or blueprint attached. It eventually got noticed a
> few weeks after the blueprint freeze. It's hard to track cross project
> dependencies if we don't know they exist. None of the various escalation
> paths raised this 

Re: [openstack-dev] [neutron] Race conditions in fwaas that impact the gate

2015-08-11 Thread Salvatore Orlando
I have been hit by these failures as well.
I think you did well by bumping that revert out of the queue; I think it
simply cures the symptom, while possibly affecting correct operation of the
firewall service.
If we are looking at removing the symptom on the API job, then I'd skip the
failing tests while somebody figures out what's going on (unless the team
decides that it is better to revert multiple workers again).

However, I think the issue might not be limited to firewall. I've seen a
worrying spike in rally failures [1]. Since it's non-voting, developers
probably do not care a lot about it, but it provides very useful
insights. I am looking at rally logs now - at the moment I do not yet have a
clear idea of the root cause of such failures.

Salvatore

[1]
http://graphite.openstack.org/render/?width=840&height=308&_salt=1439335659.449&target=hitcount%28stats.zuul.pipeline.check.job.gate-rally-dsvm-neutron-neutron.FAILURE%2C%221h%22%29&from=-72hours
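
(The same graphite data can be pulled as JSON, for anyone who wants to track
the rally job trend programmatically:)

import requests

params = {
    'format': 'json',
    'from': '-72hours',
    'target': 'hitcount(stats.zuul.pipeline.check.job.'
              'gate-rally-dsvm-neutron-neutron.FAILURE,"1h")',
}
series = requests.get('http://graphite.openstack.org/render/',
                      params=params).json()
# Each entry carries 'datapoints' as [value, timestamp] pairs.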


On 12 August 2015 at 00:21, Sean M. Collins  wrote:

> Hello,
>
> Today has been an exciting day, to say the least. Earlier today I was
> pinged on IRC about some firewall as a service unit test failures that
> were blocking patches from being merged, such as
> https://review.openstack.org/#/c/211537/.
>
> Neutron devs started poking around a bit and discussing on the IRC channel.
>
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-08-11.log.html#t2015-08-11T16:59:13
>
> I've started to dig a little bit and document what I've found on this
> bug.
>
> https://bugs.launchpad.net/neutron/+bug/1483875
>
> There was a change recently merged in devstack-gate which changes the
> MySQL database driver and the number of workers -
> https://review.openstack.org/#/c/210649/
> which might be what is triggering the race condition - but I'm honestly
> not sure.
>
> I proposed a revert to a section of the FwaaS code, but frankly I'm not
> sure if this will fix the problem - https://review.openstack.org/211677
> - so I bumped it out of the merge queue when my anxiety reached maximum.
> I'm just not confident enough about my knowledge of the FwaaS codebase
> to really be making these kinds of changes.
>
> Is there anyone that has any insights?
>
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Merge back of QoS and pecan branches

2015-08-10 Thread Salvatore Orlando
Kyle,

I can speak very little about the QoS branch, but from what I gather it is
mature enough to be merged back.
However, I believe the Pecan work is still incomplete as we need a solution
to run the RPC over AMQP server independently. Once we have that we can
start merging back what we have.

Salvatore

On 8 August 2015 at 04:39, Kyle Mestery  wrote:

> As we're beginning to wind down Liberty-3 in a few weeks, I'd like to
> present the rough, high level plan to merge back the QoS and pecan branches
> into Neutron. Ihar has been doing a great job shepherding the QoS work, and
> I believe once we're done landing the final patches this weekend [1], we
> can look to merge this branch back next week.
>
> The pecan branch [2] has a few patches left, but it's also my
> understanding from prior patches we'll need some additional testing done.
> Kevin, what else is left here? I'd like to see if we could merge this
> branch back the following week. I'd also like to hear your comments on
> enabling the pecan WSGI layer by default for Liberty and what additional
> testing is needed (if any) to make that happen.
>
> Thanks!
> Kyle
>
> [1]
> https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/qos+status:open,n,z
> [2]
> https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/pecan+status:open,n,z
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova, cinder, neutron] quota-update tenant-name bug

2015-07-31 Thread Salvatore Orlando
More comments inline.

Salvatore

On 31 July 2015 at 01:47, Kevin Benton  wrote:

> The issue is that the Neutron credentials might not have privileges to
> resolve the name to a UUID. I suppose we could just fail in that case.
>
>
As quota-update is usually restricted to admin users this should not be a
problem, unless the deployment uses per-service admin users.



> Let's see what happens with the nova spec Salvatore linked.
>

That spec seems stuck to me. I think the reason is a lack of motivation for
raising its priority.


>
> On Thu, Jul 30, 2015 at 4:33 PM, Fox, Kevin M  wrote:
>
>> If the quota update resolved the name to a uuid before it updated the
>> quota by uuid, I think it would resolve the issues? You'd just have to
>> check if keystone was in use, and then do the extra resolve on update. I
>> think the rest of the stuff can just remain using uuids?
>>
>
Once you accept that it's not a big deal to do a round trip to keystone,
then we can do whatever we want. If there is value from an API usability
perspective we'll just do that.
If the issue is instead more about the CLI UX, I would consider resolving
the name (and possibly validating the tenant uuid) in python-neutronclient.
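
A minimal sketch of that client-side resolution, assuming pre-authenticated
keystoneclient (v2) and neutronclient handles (function names are made up):

import uuid

def resolve_tenant_id(keystone, tenant_ref):
    # Accept either a UUID or a name; names are resolved via Keystone,
    # which needs admin credentials for the lookup.
    try:
        uuid.UUID(tenant_ref)
        return tenant_ref
    except ValueError:
        return keystone.tenants.find(name=tenant_ref).id

def update_quota_by_ref(neutron, keystone, tenant_ref, quotas):
    # e.g. quotas = {'network': 20, 'port': 100}
    tenant_id = resolve_tenant_id(keystone, tenant_ref)
    return neutron.update_quota(tenant_id, {'quota': quotas})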

Also, I've checked the docs [1] and [2], and neutron quota-update is not
supposed to accept a tenant name - so probably the claim made in the initial
post on this thread did not apply to neutron after all.


>> Thanks,
>> Kevin
>> --
>> *From:* Kevin Benton [blak...@gmail.com]
>> *Sent:* Thursday, July 30, 2015 4:22 PM
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [nova, cinder, neutron] quota-update
>> tenant-name bug
>>
>> Good point. Unfortunately the other issues are going to be the hard part
>> to deal with. I probably shouldn't have brought up performance as a
>> complaint at this stage. :)
>>
>> On Thu, Jul 30, 2015 at 3:26 AM, Fox, Kevin M  wrote:
>>
>>> Can a non admin update quotas? Quota updates are rare. Performance of
>>> them can take the hit.
>>>
>>> Thanks,
>>> Kevin
>>>
>>> --
>>> *From:* Kevin Benton
>>> *Sent:* Wednesday, July 29, 2015 10:44:49 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [nova, cinder, neutron] quota-update
>>> tenant-name bug
>>>
>>> >Dev lessons learned: we need to better validate our inputs and refuse
>>> to update a tenant-id that does not exist.
>>>
>>> This is something that has come up in Neutron discussions before. There
>>> are two issues here:
>>> 1. Performance: it will require a round-trip to Keystone on every
>>> request.
>>> 2. If the Neutron keystone user is unprivileged and the request context
>>> is unprivileged, we might not actually be allowed to tell if the tenant
>>> exists.
>>>
>>> The first we can deal with, but the second is going to be an issue that
>>> we might not be able to get around.
>>>
>>> How about as a temporary solution, we just confirm that the input is a
>>> UUID so names don't get used?
>>>
>>> On Wed, Jul 29, 2015 at 10:19 PM, Bruno L 
>>> wrote:
>>>
This is probably affecting other people as well, so hopefully this message
will save some headaches.

 [nova,cinder,neutron] will allow you to do a quota-update using the
 tenant-name (instead of tenant-id). They will also allow you to do a
 quota-show tenant-name and get the expected values back.

 Then you go to the tenant and end up surprised that the quotas have not
 been applied and you can still do things you were not supposed to.

It turns out that [nova,cinder,neutron] just created an entry in the
quota table, inserting the tenant-name in the tenant-id field.

 "Surprise, surprise!"

 Ops lessons learned: use the tenant-id!

Dev lessons learned: we need to better validate our inputs and refuse
 to update a tenant-id that does not exist.

 I have documented this behaviour on
 https://bugs.launchpad.net/neutron/+bug/1399065 and
 https://bugs.launchpad.net/neutron/+bug/1317515. I can reproduce it in
 IceHouse.

 Could someone please confirm if this is still the case on master? If
 not, which version of OpenStack addressed that?

 Thanks,
 Bruno


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Kevin Benton
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>

Re: [openstack-dev] [nova, cinder, neutron] quota-update tenant-name bug

2015-07-29 Thread Salvatore Orlando
To the best of my knowledge Neutron is unable to enforce tenant quotas
using the tenant name; this should remain "undocumented".
What Kevin suggests also goes in this direction, even if we have to be
careful as we're making assumptions on how tenant ids are represented (if
the deployment is not using Keystone, for instance, they could be anything).

Quotas are enforced by checking that the tenant_id for which a resource is
being created is not already using all its quota for that resource.
Neutron does not have any logic for resolving the tenant name into its
identifier in this process.

The validation of the tenant identifier is something that goes beyond quota
management. Users with admin credentials can create networks and other
resources for random tenants that do not exist. Validation of the tenant id
might make sense, but, as Kevin said, must be performed by Keystone.
Therefore, in order to avoid an extra round trip, I would personally try to
perform this task in the keystonemiddleware step (the one that does
authentication too).

Nevertheless there is a deferred nova spec [1] and patch [2] aiming at
performing exactly what's asked for here - validating the tenant id when
setting up quotas. I personally think we should seek a solution for
validating the tenant_id for every request (if the operator wishes to do
so).
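
To make that last point concrete, the existence check itself is simple; the
sketch below assumes an admin-scoped keystoneclient (v2) handle, wherever in
the pipeline it ends up being invoked:

from keystoneclient import exceptions as ks_exc

def tenant_exists(keystone_admin, tenant_id):
    # Unprivileged credentials may not be allowed to look up other
    # tenants, which is exactly Kevin's second concern above.
    try:
        keystone_admin.tenants.get(tenant_id)
        return True
    except ks_exc.NotFound:
        return False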

Salvatore

[1]
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/validate-tenant-user-with-keystone.html
[2] https://review.openstack.org/#/c/143934/



On 30 July 2015 at 07:44, Kevin Benton  wrote:

> >Dev lessons learned: we need to better validate our inputs and refuse to
> update a tenant-id that does not exist.
>
> This is something that has come up in Neutron discussions before. There
> are two issues here:
> 1. Performance: it will require a round-trip to Keystone on every request.
> 2. If the Neutron keystone user is unprivileged and the request context is
> unprivileged, we might not actually be allowed to tell if the tenant exists.
>
> The first we can deal with, but the second is going to be an issue that we
> might not be able to get around.
>
> How about as a temporary solution, we just confirm that the input is a
> UUID so names don't get used?
>
> On Wed, Jul 29, 2015 at 10:19 PM, Bruno L  wrote:
>
>> This is probably affecting other people as well, so hopefully this message
>> will save some headaches.
>>
>> [nova,cinder,neutron] will allow you to do a quota-update using the
>> tenant-name (instead of tenant-id). They will also allow you to do a
>> quota-show tenant-name and get the expected values back.
>>
>> Then you go to the tenant and end up surprised that the quotas have not
>> been applied and you can still do things you were not supposed to.
>>
>> It turns out that [nova,cinder,neutron] just created an entry in the
>> quota table, inserting the tenant-name in the tenant-id field.
>>
>> "Surprise, surprise!"
>>
>> Ops lessons learned: use the tenant-id!
>>
>> Dev lessons learned: we need to better validate our inputs and refuse to
>> update a tenant-id that does not exist.
>>
>> I have documented this behaviour on
>> https://bugs.launchpad.net/neutron/+bug/1399065 and
>> https://bugs.launchpad.net/neutron/+bug/1317515. I can reproduce it in
>> IceHouse.
>>
>> Could someone please confirm if this is still the case on master? If not,
>> which version of OpenStack addressed that?
>>
>> Thanks,
>> Bruno
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr][kolla] - Bringing Dockers networking to Neutron

2015-07-23 Thread Salvatore Orlando
From my low-orbit perspective (I would be lying if I said 10,000 or 30,000
ft!) Kuryr's ultimate goal is to provide:
1) a container-oriented set of neutron plugins and drivers (you know, the
ML2 driver, an l3 service plugin, an lbaas driver, etc.)
2) possibly (I'm not sure if that's the case) control plane elements
specifically designed to work with containers

In that respect, I tend to believe that there might be a good relationship
between the two projects, with Kolla providing containers for the control
plane elements that Kuryr wants to deploy. Probably Kuryr (which at the
moment is little more than the output of cookiecutter) does not want
to be in the business of building containers, just as (in my opinion)
Kolla does not want to be in the container networking business.

Salvatore

On 23 July 2015 at 18:35, Mohammad Banikazemi  wrote:

> I'll let the creators of the project speak for themselves, but here is my take
> on project Kuryr.
>
> The goal is not to containerize Neutron or other OpenStack services. The
> main objective is to use Neutron as a networking backend option for Docker.
> The original proposal was to do so in the context of using containers (for
> different Neutron backends or vif types). While the main objective is
> fundamental to the project, the latter (use of containers in this
> particular way) seems to be a tactical choice we need to make. I see
> several different options available to achieve the same goal in this regard.
>
> Now, there is another aspect of using containers in the context of this
> project that is more interesting at least to me (and I do not know if
> others share this view or not) and that is the use of containers for
> providing network services that are not available through libnetwork as of
> now or in near future or ever. From the talks I have had with libnetwork
> developers the plan is to stay with the basic networking infrastructure and
> leave additional features to be developed by the community and to do so
> possibly by using what else, containers.
>
> So take the current features available in libnetwork. You mainly get
> support for connectivity/isolation for multiple networks across multiple
> hosts. Now if you want to route between these networks, you have to devise
> a solution yourself. One possible solution would be having a router service
> in a container that gets connected to say two Docker networks. Whether the
> router service is implemented with the use of the current Neutron router
> services or by some other solutions is something to look into and discuss
> but this is a direction where I think Kuryr (did I spell it right? ;)) can
> and should contribute to.
>
> Just my 2 cents on this topic.
>
> Best,
>
> Mohammad
>
>
>
> From: "Steven Dake (stdake)" 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, Eran Gampel <
> eran.gam...@toganetworks.com>, Antoni Segura Puimedon ,
> Irena Berezovsky , "gal.sa...@gmail.com" <
> gal.sa...@gmail.com>
> Date: 07/23/2015 11:34 AM
> Subject: Re: [openstack-dev] [Neutron][Kuryr][kolla] - Bringing Dockers
> networking to Neutron
> --
>
>
>
> Gal,
>
> I’m not clear exactly what you plan to do with regards to building docker
> containers for Neutron, but the Kolla project has developed both
> linuxbridge and ovs agents as well as a complete running Neutron system
> inside container technology. We can launch it AIO with docker-compose, or
> alternatively it can be launched AIO or multinode with Ansible. Note we
> have a complete OpenStack implementation, not just Neutron.
>
> We would welcome additional driver support using the standard OpenStack
> gerrit workflow.
>
>
> https://github.com/stackforge/kolla/tree/master/docker/centos/binary/neutron
>
> Note we are also in the process of adding build from source to our tree
> here:
>
>
> https://github.com/stackforge/kolla/tree/master/docker/centos/source/neutron
>
> For further background on Kolla, check out our wiki page:
>
> https://wiki.openstack.org/wiki/Kolla
>
> Best wishes,
> -steve
>
> From: Gal Sagie <gal.sa...@gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" <openstack-dev@lists.openstack.org>
> Date: Wednesday, July 22, 2015 at 9:28 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>,
> Eran Gampel <eran.gam...@toganetworks.com>,
> Antoni Segur

Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-21 Thread Salvatore Orlando
A few comments inline.

Generally speaking, the only thing I'd like to remark is that this use case
makes sense independently of whether you are using overlays, or any other
"SDN" solution (whatever SDN means to you).

Also, please note that this thread is now split in two - there's a new
branch starting with Ian's post. So perhaps let's make two threads.

On 21 July 2015 at 14:21, Neil Jerram  wrote:

> On 20/07/15 18:36, Carl Baldwin wrote:
> > I'm looking for feedback from anyone interest but, in particular, I'd
> > like feedback from the following people for varying perspectives:
> > Mark McClain (proposed alternate), John Belamaric (IPAM), Ryan Tidwell
> > (BGP), Neil Jerram (L3 networks), Aaron Rosen (help understand
> > multi-provider networks) and you if you're reading this list of names
> > and thinking "he forgot me!"
> >
> > We have been struggling to develop a way to model a network which is
> > composed of disjoint L2 networks connected by routers.  The intent of
> > this email is to describe the two proposals and request input on the
> > two in attempt to choose a direction forward.  But, first:
> > requirements.
> >
> > Requirements:
> >
> > The network should appear to end users as a single network choice.
> > They should not be burdened with choosing between segments.  It might
> > interest them that L2 communications may not work between instances on
> > this network but that is all.


It is however important to ensure services like DHCP keep working as usual.
Treating segments as logical networks in their own right is the simplest
solution to achieve this, imho.


> This has been requested by numerous
> > operators [1][4].  It can be useful for external networks and provider
> > networks.
> >
> > The model needs to be flexible enough to support two distinct types of
> > addresses:  1) address blocks which are statically bound to a single
> > segment and 2) address blocks which are mobile across segments using
> > some sort of dynamic routing capability like BGP or programmatically
> > injecting routes in to the infrastructure's routers with a plugin.
>
> FWIW, I hadn't previously realized (2) here.
>

A "mobile address block" translates to a subnet whose network association
might change.
Achieving mobile address block does not seem simple to me at all. Route
injection (booring) and BGP might solve the networking aspect of the
problem, but we'd need also coordination with the compute service to ensure
also all the workloads using addresses from the mobile block migrate;
unless I've not understood the way these mobile address blocks work, I
struggle to see this as a requirement.


>
> >
> > Overlay networks are not the answer to this.  The goal of this effort
> > is to scale very large networks with many connected ports by doing L3
> > routing (e.g. to the top of rack) instead of using a large continuous
> > L2 fabric.
>

As a side note, I find it interesting that overlays were indeed proposed as a
solution to avoid hybrid L2/L3 networks or having to span VLANs across the
core and aggregation layers.


> Also, the operators interested in this work do not want
> > the complexity of overlay networks [4].
> >
> > Proposal 1:
> >
> > We refined this model [2] at the Neutron mid-cycle a couple of weeks
> > ago.  This proposal has already resonated reasonably with operators,
> > especially those from GoDaddy who attended the Neutron sprint.  Some
> > key parts of this proposal are:
> >
> > 1.  The routed super network is called a front network.  The segments
> > are called back(ing) networks.
> > 2.  Backing networks are modeled as admin-owned private provider
> > networks but otherwise are full-blown Neutron networks.
> > 3.  The front network is marked with a new provider type.
> > 4.  A Neutron router is created to link the backing networks with
> > internal ports.  It represents the collective routing ability of the
> > underlying infrastructure.
> > 5.  Backing networks are associated with a subset of hosts.
> > 6.  Ports created on the front network must have a host binding and
> > are actually created on a backing network when all is said and done.
> > They carry the ID of the backing network in the DB.
>
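
(Spelled out with python-neutronclient - 'neutron' being an authenticated
client - the admin workflow would look roughly as below. Note the 'routed'
provider type is the new type item 3 proposes, not something Neutron
supports today:)

# Item 2: a backing network is a plain admin-owned provider network.
backing = neutron.create_network({'network': {
    'name': 'backing-rack1',
    'provider:network_type': 'vlan',
    'provider:physical_network': 'physnet-rack1',
    'provider:segmentation_id': 100}})['network']
subnet = neutron.create_subnet({'subnet': {
    'network_id': backing['id'], 'ip_version': 4,
    'cidr': '10.1.0.0/24'}})['subnet']

# Item 3: the front network carries the proposed (hypothetical) type.
front = neutron.create_network({'network': {
    'name': 'front', 'provider:network_type': 'routed'}})['network']

# Item 4: one router representing the fabric's routing ability, with an
# interface on each backing network.
router = neutron.create_router(
    {'router': {'name': 'fabric-router'}})['router']
neutron.add_interface_router(router['id'], {'subnet_id': subnet['id']})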

While the logical model and workflow you describe here make sense, I have
the impression that:
1) The front network is not a neutron logical network, because it does not
really behave like a network, with the only exception that you can pass its
id to the nova API. To reinforce this, consider that the front network
basically has no ports.
2) from a topological perspective the front network "kind of" behaves like
an external network; but it isn't. The front network is not really a common
gateway for all backing networks; it's more like a label attached to
the router which interconnects all the backing networks.
3) more on topology: how can we know that all these segments will always be
connected by a single logical router? Using static routing (or if one day
BGP will be a thing), it is alre

Re: [openstack-dev] [neutron] Should we document the using of "device:owner" of the PORT ?

2015-07-16 Thread Salvatore Orlando
It is not possible to constrain this attribute to an enum, because there is
no fixed list of device owners. Nevertheless it's good to document known
device owners.

Likewise, the API layer should have checks in place to ensure accidental
updates to this attribute do not impact control plane functionality, or at
least do not leave the system in an inconsistent state.
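
An illustrative shape for such a check (names here are invented, not actual
Neutron code; the prefix matches the network:* owners listed later in this
thread):

NEUTRON_OWNED_PREFIX = 'network:'

def validate_device_owner_update(port, new_owner):
    current = port.get('device_owner') or ''
    if current.startswith(NEUTRON_OWNED_PREFIX):
        # Ports wired by Neutron itself (routers, DHCP, ...) should not
        # have their owner rewritten by a tenant-issued PUT.
        raise ValueError("device_owner %r on port %s is managed by Neutron "
                         "and cannot be changed to %r"
                         % (current, port['id'], new_owner))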

Salvatore


On 16 July 2015 at 07:51, Kevin Benton  wrote:

> I'm guessing Salvatore might just be suggesting that we restrict users
> from populating values that have special meaning (e.g. l3 agent router
> interface ports). I don't think we could constrain the owner
> field to essentially an enum at this point.
>
> On Wed, Jul 15, 2015 at 10:22 PM, Mike Kolesnik 
> wrote:
>
>>
>> --
>>
>> Yes please.
>>
>> This would be a good starting point.
>> I also think that the ability of editing it, as well as the value it
>> could be set to, should be constrained.
>>
>> FYI the oVirt project uses this field to identify ports it creates and
>> manages.
>> So if you're going to constrain it to something, it should probably be
>> configurable so that managers other than Nova can continue to use Neutron.
>>
>>
>> As you have surely noticed, there are several code paths which rely on an
>> appropriate value being set in this attribute.
>> This means a user can potentially trigger malfunctions by sending PUT
>> requests to edit this attribute.
>>
>> Summarizing, I think that documenting its usage is a good starting point,
>> but I believe we should address the way this attribute is exposed at the
>> API layer as well.
>>
>> Salvatore
>>
>>
>>
>> On 13 July 2015 at 11:52, Wang, Yalei  wrote:
>>
>>> Hi all,
>>> The device:owner of the port is defined as a 255-byte string, and is
>>> widely used now, indicating the use of the port.
>>> It seems we can fill it freely, and a user can also update/set it from the
>>> cmd line (port-update $PORT_ID --device_owner), and I can't find a
>>> guideline for its use.
>>>
>>> What is its function? It indicates the use of the port, and it seems
>>> horizon also uses it to show the topology.
>>> And if nova really needs it editable, should we at least document all of
>>> the possible values in some guide to make it clear? If yes, I can do it.
>>>
>>> I got these usages from the code (maybe not complete, pls point it out):
>>>
>>> From constants.py,
>>> DEVICE_OWNER_ROUTER_HA_INTF = "network:router_ha_interface"
>>> DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
>>> DEVICE_OWNER_ROUTER_GW = "network:router_gateway"
>>> DEVICE_OWNER_FLOATINGIP = "network:floatingip"
>>> DEVICE_OWNER_DHCP = "network:dhcp"
>>> DEVICE_OWNER_DVR_INTERFACE = "network:router_interface_distributed"
>>> DEVICE_OWNER_AGENT_GW = "network:floatingip_agent_gateway"
>>> DEVICE_OWNER_ROUTER_SNAT = "network:router_centralized_snat"
>>> DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
>>>
>>> And from debug_agent.py
>>> DEVICE_OWNER_NETWORK_PROBE = 'network:probe'
>>> DEVICE_OWNER_COMPUTE_PROBE = 'compute:probe'
>>>
>>> And setting from nova/network/neutronv2/api.py,
>>> 'compute:%s' % instance.availability_zone
>>>
>>>
>>> Thanks all!
>>> /Yalei
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][db] online-schema-migrations patch landed

2015-07-15 Thread Salvatore Orlando
Do you reckon that the process that led to creating a migration like [1]
should also be documented in devref?
That might be helpful for developers, unless that process is already
documented elsewhere.

Salvatore


[1] https://review.openstack.org/#/c/202013/1
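P.S.: for anyone who has not opened [1] yet, the split boils down to roughly
the sketch below. This is purely illustrative: revision ids, table and column
names are made up; only the depends_on relationship between the two scripts
mirrors the actual layout.

# --- expand script: additive changes, safe on a running neutron-server ---
from alembic import op
import sqlalchemy as sa

revision = '1111aaaa2222'          # made-up revision id
down_revision = 'kilo'


def upgrade():
    # Old code simply ignores the new column, so this can run online.
    op.add_column('widgets',
                  sa.Column('shared', sa.Boolean(), nullable=True))


# --- contract script (a separate file in the contract branch) ---
revision = '3333bbbb4444'          # made-up revision id
down_revision = 'kilo'
depends_on = ('1111aaaa2222',)     # contract depends on the expand phase


def upgrade():
    # Destructive: running servers would break, so this is offline-only.
    op.drop_column('widgets', 'legacy_flag')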

On 15 July 2015 at 15:54, Mike Bayer  wrote:

>
>
> On 7/15/15 9:26 AM, Ihar Hrachyshka wrote:
>
>>
>> Hi all,
>>
>> since it's a high impact change in the migration tree, I wanted to
>> drop an email to everyone affected (basically, anyone who wants to
>> introduce a new migration from now on).
>>
>> So there was a proposal to split migration rules into independent
>> branches, with one 'expand' branch containing only those rules that
>> are safe to apply while neutron-server is running. Proposal is at:
>>
>> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/online-
>> schema-migrations.html
>>
>> And the first patch to implement it just landed in neutron:
>>
>> http://git.openstack.org/cgit/openstack/neutron/commit/?id=c7acfbabdc13e
>> d2a73bdbc6275af8063b8c1eb2f
>>
>> From now on,
>>
>> - there are multiple alembic heads for any database state;
>> - there is a new file structure under
>> neutron/db/migration/versions/alembic_versions/{cycle}_{branch};
>> - you may need to split your migrations into pieces (for expand and
>> contract branches, respectively, depending on the character of schema
>> changes; more details in the spec);
>> - 'neutron-db-manage upgrade head' still applies all heads;
>> - I'd like to rearrange migration trees for *aas repos in the same
>> way, though neutron-db-manage still supports the old file layout.
>>
>> To get an example of how the split would look like for existing
>> migration rules in review, I took Kevin's patch for RBAC:
>>
>> https://review.openstack.org/191707
>>
>> And transformed it into something that adopts the new file layout:
>>
>> https://review.openstack.org/202013
>>
>> Changes I made:
>> - split migration script into two pieces;
>> - updated HEADS file;
>> - made the contract phase script depends_on the expand one;
>>
>> Note that the 'neutron-db-manage revision --autogenerate' command does not
>> yet filter operations into corresponding branches, though we would
>> like to have it in L once new alembic is released.
>>
>
> This API is in master and will be the focus of the 0.8 release. This is a
> major refactor so I'm still working out backwards-compatibility stuff as
> well as getting some more pluggability into autogenerate while we're at
> it.   The documentation for the specific aspect of "filtering operations
> during autogenerate" is up at
> http://alembic.readthedocs.org/en/latest/api/autogenerate.html#customizing-revision-generation
> .
>
>
>
>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova] proxy quota/limits info from neutron

2015-07-15 Thread Salvatore Orlando
Some comments inline.

Salvatore

On 15 July 2015 at 10:24, Alex Xu  wrote:

>
>
> 2015-07-15 5:14 GMT+08:00 Matt Riedemann :
>
>>
>>
>> On 7/14/2015 3:43 PM, Cale Rath wrote:
>>
>>> Hi,
>>>
>>> I created a patch to fail on the proxy call to Neutron for used limits,
>>> found here: https://review.openstack.org/#/c/199604/
>>>
>>> This patch was done because of this:
>>>
>>> http://docs.openstack.org/developer/nova/project_scope.html?highlight=proxy#no-more-api-proxies
>>> ,
>>> where it’s stated that Nova shouldn’t be proxying API calls.
>>>
>>> That said, Matt Riedemann brings up the point that this breaks the case
>>> where Neutron is installed and we want to be more graceful, rather than
>>> just raising an exception.  Here are some options:
>>>
>>> 1. fail - (the code in the patch above)
>>> 2. proxy to neutron for floating ips and security groups - that's what
>>> the original change was doing back in havana
>>> 3. return -1 or something for floatingips/security groups to indicate
>>> that we don't know, you have to get those from neutron
>>>
>>> Does anybody have an opinion on which option we should do regarding API
>>> proxies in this case?
>>>
>>> Thanks,
>>>
>>> Cale Rath
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> I prefer the proxy option; even though we don't want to add more proxies
>> to other services, it's the least of all evils here in my opinion.
>>
>> I don't think we can do #1, that breaks anyone using those APIs and is
>> using Neutron, so it's a non-starter.
>>
>
> agree
>
>
>>
>> #3 is an API change in semantics which would at least be a microversion
>> and is kind of clunky.
>>
>
> agree too~
>

Also it overlaps with Neutron semantics of returning -1 for "unlimited" and
it could be misinterpreted.

>
>
>>
>> For #2 we at least have the nova.network.base_api which we didn't have in
>> Havana when I was originally working on this, that would abstract the
>> neutron-specific cruft out of the nova-api code.  The calls to neutron were
>> pretty simple from what I remember - we could just resurrect the old patch:
>>
>> https://review.openstack.org/#/c/43822/
>
>
> +1, but it looks like this needs a new microversion as well. It means that
> after version 2.x this API value is valid for Neutron; before version 2.x,
> don't trust this API...
>

This is correct, and makes sense in my opinion.
Still, I agree that the final goal should be to stop proxying these calls.
#2 is in my opinion a good strategy for transitioning to #1. I am not sure
whether it is acceptable to just document that retrieving limits in Nova
for resources managed by other projects is deprecated and will not be
allowed anymore in M or N.


>
>
>>
>>
>> Another option is #4: we mark the bug as won't fix and, if Neutron is
>> configured, we log a warning saying that some of the resources aren't
>> going to be correct and that the Neutron API should be used to get quota
>> information for security groups, floating IPs, etc.  That's also kind of
>> gross IMO, but it's an option.
>
>
> if we plan to deprecate the network proxy API in the not-too-distant
> future, this is an easy option.
>

I am not sure this is a good option. The warning in this case should be
returned to the user making the limits request; logging it just tells the
operator somebody has retrieved limits using a proxy.


>
>>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Should we document the using of "device:owner" of the PORT ?

2015-07-14 Thread Salvatore Orlando
Yes please.

This would be a good starting point.
I also think that the ability to edit it, as well as the values it could
be set to, should be constrained.

As you have surely noticed, there are several code paths which rely on an
appropriate value being set in this attribute.
This means a user can potentially trigger malfunctioning by sending PUT
requests to edit this attribute.

Summarizing, I think that documenting its usage is a good starting point, but
I believe we should address the way this attribute is exposed at the API
layer as well.
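
To make the suggestion concrete, the kind of API-layer constraint I have in
mind looks roughly like the snippet below. This is only a sketch: the prefix
list, the helper function and the context type are illustrative assumptions,
not existing Neutron code.

import collections

Context = collections.namedtuple('Context', ['is_admin'])

# Prefixes with special control-plane meaning (cf. constants.py below).
RESERVED_DEVICE_OWNER_PREFIXES = ('network:', 'neutron:')


def validate_device_owner(context, value):
    """Reject non-admin updates that use reserved device_owner values."""
    if (value.startswith(RESERVED_DEVICE_OWNER_PREFIXES)
            and not context.is_admin):
        raise ValueError("device_owner values starting with %s are reserved"
                         % ', '.join(RESERVED_DEVICE_OWNER_PREFIXES))


# An admin may set reserved values; a regular tenant may not.
validate_device_owner(Context(is_admin=True), 'network:router_interface')
try:
    validate_device_owner(Context(is_admin=False), 'network:dhcp')
except ValueError as exc:
    print(exc)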

Salvatore



On 13 July 2015 at 11:52, Wang, Yalei  wrote:

>  Hi all,
>
> The device:owner of the port is defined as a 255-byte string, and is
> widely used now, indicating the use of the port.
> It seems we can fill it freely, and a user can also update/set it from the
> command line (port-update $PORT_ID --device_owner), and I can't find a
> guideline for its use.
>
> What is its function? It indicates the use of the port, and it seems
> Horizon also uses it to show the topology.
> And Nova really needs it editable. Should we at least document all of the
> possible values in some guide to make them clear? If yes, I can do it.
>
>
> I got these usages from the code (maybe not complete, please point out any I missed):
>
> From constants.py,
> DEVICE_OWNER_ROUTER_HA_INTF = "network:router_ha_interface"
> DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
> DEVICE_OWNER_ROUTER_GW = "network:router_gateway"
> DEVICE_OWNER_FLOATINGIP = "network:floatingip"
> DEVICE_OWNER_DHCP = "network:dhcp"
> DEVICE_OWNER_DVR_INTERFACE = "network:router_interface_distributed"
> DEVICE_OWNER_AGENT_GW = "network:floatingip_agent_gateway"
> DEVICE_OWNER_ROUTER_SNAT = "network:router_centralized_snat"
> DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
>
> And from debug_agent.py
> DEVICE_OWNER_NETWORK_PROBE = 'network:probe'
> DEVICE_OWNER_COMPUTE_PROBE = 'compute:probe'
>
> And setting from nova/network/neutronv2/api.py,
> 'compute:%s' % instance.availability_zone
>
>
> Thanks all!
>
> /Yalei
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrades] Potential issues when performing Neutron upgrades

2015-07-13 Thread Salvatore Orlando
Some pedantic comments inline.

Salvatore

On 13 July 2015 at 23:29, Russell Bryant  wrote:

> On 07/13/2015 05:08 PM, Kevin Benton wrote:
> > Thanks for the info. So the equivalent in neutron would be if we just
> > ensure backward compatible AMQP APIs, right?
>
> There's a few parts:
>
> 1) Backwards compatibility with changes to the oslo.messaging APIs using
> API versioning (what you're referring to, I think).  Neutron does this
> (though not tested in a mixed version mid-upgrade environment yet).
>
> 2) Compatibility of the data sent over those interfaces.  This is where
> oslo.versionedobjects comes in.  Breakage here is much easier to miss
> since it's not always obvious when you're modifying a data structure
> that's sent over the wire.  There has been a ton of work in Nova to
> version the data sent over the wire and have the ability for a service
> (nova-conductor in nova's case) to be able to convert objects back to a
> version that an older service can understand.  This is the most likely
> way Neutron will break rolling upgrades right now, especially since it's
> not tested.
>

It is worth noting that versioned objects are helpful in any circumstance
where you have a versioned RPC API, be it AMQP or REST or whatever.
Neutron now completely lacks a layer between the front-end API endpoint and
the plugin, which then manages DB access.
The now pretty much defunct "perestroika" blueprint aimed to add this
layer; these versioned objects would live in it, in what older folks like
me who studied software engineering in the late '90s would call the
"business logic layer".

But this discussion is really out of scope for this thread, so I'll stop
here.


> 3) DB schema.  Depending on what services access the db directly and
> what the rolling upgrade strategy is, there may be some additional
> constraints on making sure the db schema is backwards copmatible, too.
>

I guess if one properly uses object persistence so that DB access can be
entirely performed via API objects, then #2 should imply #3 (and possibly
even hide backward incompatible DB schema changes).


>
> --
> Russell Bryant
>
> > On Mon, Jul 13, 2015 at 7:33 AM, Russell Bryant  > > wrote:
> >
> > On 07/13/2015 04:09 AM, Kevin Benton wrote:
> > >>because you won't have to run Neutron agents on compute nodes
> anymore.
> > >
> > > How will upgrades work for OVN?
> >
> > We haven't written anything down yet, but here's what I expect.
> >
> > Right now we're still changing the db schema however is needed
> without
> > messing with versioning.  As we get to "production ready", I expect
> > we'll start being strict about only making backwards compatible ovsdb
> > schema changes to make upgrades easier.
> >
> > There are 2 central components - ovn-northd and ovsdb-server - that
> > would be upgraded first, which I would expect to be done at the same
> > time as upgrading your Neutron control plane.  As long as any ovsdb
> > schema changes are backwards compatible, you could do
> rolling-upgrades
> > of ovn-controller on compute or network nodes.
> >
> > --
> > Russell Bryant
> >
> >
>  __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > <
> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > --
> > Kevin Benton
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with pymysql

2015-07-08 Thread Salvatore Orlando
I agree, and I would make the switch as soon as possible. The graphite graph
you posted showed that since 6/28 the difference in failure rate is so small
that it isn't even statistically significant. However, spikes in failure rates
of the unstable job also suggest that you're starting to chase a moving
target, and we know how painful this is from the experience we had when
enabling the neutron full job.

Salvatore



On 8 July 2015 at 20:21, Armando M.  wrote:

> Hi,
>
> Another brief update on the matter:
>
> Failure rate trends [1] are showing that unstable (w/ multiple API workers
> + pymysql driver) and stable configurations (w/o) are virtually aligned and
> I am proposing that it is time to drop the unstable infra configuration
> [2,3] that allowed the team to triage/experiment and get to a solution. I'll
> watch [1] a little longer before I think it's safe to claim that we're out
> of the woods.
>
> Cheers,
> Armando
>
> [1] http://goo.gl/YM7gUC
> [2] https://review.openstack.org/#/c/199668/
> [3] https://review.openstack.org/#/c/199672/
>
> On 22 June 2015 at 14:10, Armando M.  wrote:
>
>> Hi,
>>
>> A brief update on the issue that sparked this thread:
>>
>> A little over a week ago, bug [1] was filed. The gist of that was that
>> the switch to pymysql unveiled a number of latent race conditions that made
>> Neutron unstable.
>>
>> To try and nip these in the bud, the Neutron team filed a number of
>> patches [2], to create an unstable configuration that would allow them to
>> troubleshoot and experiment a solution, by still keeping the stability in
>> check (a preliminary proposal for a fix has been available in [4]).
>>
>> The latest failure rate trend is shown in [3]; as you can see, we're
>> still gathering data, but it seems that the instability gap between the two
>> jobs (stable vs unstable) has widened, and should give us plenty of data
>> points to devise a resolution strategy.
>>
>> I have documented the most recurrent traces in the bug report [1].
>>
>> Will update once we manage to get the two curves to kiss each other
>> again and get close to a more acceptable failure rate.
>>
>> Cheers,
>> Armando
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1464612
>> [2] https://review.openstack.org/#/q/topic:neutron-unstable,n,z
>> [3] http://goo.gl/YM7gUC
>> [4] https://review.openstack.org/#/c/191540/
>>
>>
>> On 12 June 2015 at 11:13, Boris Pavlovic  wrote:
>>
>>> Sean,
>>>
>>> Thanks for quick fix/revert https://review.openstack.org/#/c/191010/
>>> This unblocked Rally gates...
>>>
>>> Best regards,
>>> Boris Pavlovic
>>>
>>> On Fri, Jun 12, 2015 at 8:56 PM, Clint Byrum  wrote:
>>>
 Excerpts from Mike Bayer's message of 2015-06-12 09:42:42 -0700:
 >
 > On 6/12/15 11:37 AM, Mike Bayer wrote:
 > >
 > >
 > > On 6/11/15 9:32 PM, Eugene Nikanorov wrote:
 > >> Hi neutrons,
 > >>
 > >> I'd like to draw your attention to an issue discovered by rally
 gate job:
 > >>
 http://logs.openstack.org/96/190796/4/check/gate-rally-dsvm-neutron-rally/7a18e43/logs/screen-q-svc.txt.gz?level=TRACE
 > >>
 > >> I don't have bandwidth to take a deep look at it, but first
 > >> impression is that it is some issue with nested transaction support
 > >> either on sqlalchemy or pymysql side.
 > >> Also, besides errors with nested transactions, there are a lot of
 > >> Lock wait timeouts.
 > >>
 > >> I think it makes sense to start with reverting the patch that moves
 > >> to pymysql.
 > > My immediate reaction is that this is perhaps a concurrency-related
 > > issue; because PyMySQL is pure python and allows for full blown
 > > eventlet monkeypatching, I wonder if somehow the same PyMySQL
 > > connection is being used in multiple contexts. E.g. one greenlet
 > > starts up a savepoint, using identifier "_3" which is based on a
 > > counter that is local to the SQLAlchemy Connection, but then another
 > > greenlet shares that PyMySQL connection somehow with another
 > > SQLAlchemy Connection that uses the same identifier.
 >
 > reading more of the log, it seems the main issue is just that there's
 a
 > deadlock on inserting into the securitygroups table.  The deadlock on
 > insert can be because of an index being locked.
 >
 >
 > I'd be curious to know how many greenlets are running concurrently
 here,
 > and what the overall transaction looks like within the operation that
 is
 > failing here (e.g. does each transaction insert multiple rows into
 > securitygroups?  that would make a deadlock seem more likely).

 This begs two questions:

 1) Are we handling deadlocks with retries? It's important that we do
 that to be defensive.

 2) Are we being careful to sort the table order in any multi-table
 transactions so that we minimize the chance of deadlocks happening
 because of any cross table deadlocks?



Re: [openstack-dev] [neutron] Plethora of dbase migration questions...

2015-07-07 Thread Salvatore Orlando
Possibly I was wrong in mixing up git & alembic.
It should be "upgrade head" - lowercase.

If that doesn't work, there might be some other issue lurking.

Salvatore

On 7 July 2015 at 17:44, Paul Michali  wrote:

> Salvatore,
>
> I changed head to the version before my new one, and then tried to upgrade
> and I see this:
>  neutron-db-manage --config-file /opt/stack/neutron/etc/neutron.conf
> --service vpnaas upgrade HEAD
> Traceback (most recent call last):
>   File "/usr/local/bin/neutron-db-manage", line 10, in 
> sys.exit(main())
>   File "/opt/stack/neutron/neutron/db/migration/cli.py", line 238, in main
> CONF.command.func(config, CONF.command.name)
>   File "/opt/stack/neutron/neutron/db/migration/cli.py", line 105, in
> do_upgrade
> run_sanity_checks(config, revision)
>   File "/opt/stack/neutron/neutron/db/migration/cli.py", line 229, in
> run_sanity_checks
> script_dir.run_env()
>   File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line
> 390, in run_env
> util.load_python_file(self.dir, 'env.py')
>   File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 243,
> in load_python_file
> module = load_module_py(module_id, path)
>   File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line
> 79, in load_module_py
> mod = imp.load_source(module_id, path, fp)
>   File
> "/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
> line 86, in 
> run_migrations_online()
>   File
> "/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
> line 67, in run_migrations_online
> engine = session.create_engine(neutron_config.database.connection)
>   File
> "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py",
> line 112, in create_engine
> url = sqlalchemy.engine.url.make_url(sql_connection)
>   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py",
> line 186, in make_url
> return _parse_rfc1738_args(name_or_url)
>   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py",
> line 235, in _parse_rfc1738_args
> "Could not parse rfc1738 URL from string '%s'" % name)
> sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''
>
> Any ideas what is wrong here?
>
> On Tue, Jul 7, 2015 at 10:05 AM Paul Michali  wrote:
>
>>
>> Yes, I wasn't using the --service option, so I suspect that is why my
>> down_version was wrong.  In talking with Akihiro, I added a check to PEP8
>> and made sure that it fails if head is wrong. It is:
>> https://review.openstack.org/#/c/199082/ (of course that failed py27 -
>> I've got to see if there was some recent breakage in vpn repo, again).
>>
>> Regarding the migration, one of the new columns may be None, but there
>> must be at least one IP version entry (there is an existing test in VPN for
>> using a router w/o an external IP set). Since the new code will rely on
>> these new fields, I'd like to populate them as part of the migration. I
>> think it would be more complicated to handle during operation.
>>
>> Does anyone have examples of how to do queries of objects, from the
>> migration upgrade() code?
>>
>>
>> Regards,
>>
>> PCM
>>
>> On Tue, Jul 7, 2015 at 9:02 AM Akihiro Motoki  wrote:
>>
>>> 2015-07-07 21:39 GMT+09:00 Henry Gessau :
>>>
>>>>  On Tue, Jul 07, 2015, Paul Michali  
>>>> wrote:
>>>>
>>>> Thanks Salvatore for the responses. See @PCM in-line...
>>>>
>>>>
>>>>
>>>>  On Tue, Jul 7, 2015 at 6:14 AM Salvatore Orlando 
>>>> wrote:
>>>>
>>>>> Some comments inline.
>>>>>
>>>>>  Salvatore
>>>>>
>>>>>On 6 July 2015 at 20:00, Paul Michali < 
>>>>> p...@michali.net> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>  I have some urgent requests about migration that I'm hoping to get
>>>>>> some info on. I'm working on a bug where I need to add two (related) 
>>>>>> fields
>>>>>> to a table for VPNaaS. Here's the objectives related to migration...
>>>>>>
>>>>>>  1) create local_v4_ip and lcoal_v6_ip fields in the vpnservice table
>>>>>> 2) for each entry in the vpnservice table:

Re: [openstack-dev] [neutron] Plethora of dbase migration questions...

2015-07-07 Thread Salvatore Orlando
On 7 July 2015 at 14:00, Paul Michali  wrote:

> Thanks Salvatore for the responses. See @PCM in-line...
>
>
>
> On Tue, Jul 7, 2015 at 6:14 AM Salvatore Orlando 
> wrote:
>
>> Some comments inline.
>>
>> Salvatore
>>
>> On 6 July 2015 at 20:00, Paul Michali  wrote:
>>
>>> Hi,
>>>
>>> I have some urgent requests about migration that I'm hoping to get some
>>> info on. I'm working on a bug where I need to add two (related) fields to a
>>> table for VPNaaS. Here's the objectives related to migration...
>>>
>>> 1) create local_v4_ip and lcoal_v6_ip fields in the vpnservice table
>>> 2) for each entry in the vpnservice table:
>>> 2.1) Get the router.gw_port.fixed_ips list
>>> 2.2) Determine the version of each fixed IP and store the first of
>>> each version (if any) into the appropriate new field.
>>>
>>> I have created a migration file, and I changed the down_revision to be
>>> the number of the revision that is the first in the migration chain in the
>>> VPN repo.
>>>
>>> Here are the many questions I have...
>>>
>>> When I look in the VPN repo, the HEAD file has the version 'kilo', which
>>> is not the current head.
>>>
>>
>>> Shouldn't it be the version number of the first file in the migration chain?
>>>
>>
>> It should indeed. How are you generating the revision script? Using
>> neutron-db-manage it should be updated automatically [1]
>>
>
> @PCM I ran neutron-db-manage, when in the neutron repo, and it assigned
> some version, but it was not the latest in the neutron-vpnaas repo.
>

When you create a revision, Alembic automatically assigns it a unique id.
However, the Neutron migration CLI (neutron-db-manage) should then take
care of updating the HEAD file automatically. If this is not happening,
that's where the problem lies.


> I checked the VPN repo and there were a chain of versions, which I used to
> determine what the head should be and have set the version accordingly.
> However, in the current repo, head is set to "kilo", which appears to be
> incorrect.  The versions are:
>
> 5689aa52
> kilo   <<< HEAD
> 3ea02b2a773e
> start_neutron_vpnaas
> None
>
> Should I do a separate commit that fixes the HEAD file, or just fix it as
> part of the bug fix I'm working on.
>

In order to pass functional tests the HEAD file must point to the topmost
revision (5689aa52)


> BTW, at one point, after having correctly set the HEAD and versions in my
> new migration file, I think I ran neutron-db-manage check_migration, and I
> think it set the HEAD to my version, but it did that in the neutron repo,
> and not the VPN repo.  I might have been running from the wrong repo?
>

Yes, probably.
neutron-db-manage by default works on the neutron repo. In order to work
with a service repo you should specify it on the command line (
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/cli.py#n41
).
This might also explain why the HEAD is not getting updated in your repo.


>
>
>
>> For my commit, I'm assuming I change the HEAD file to use my migration
>>> file's version?
>>>
>>
>> You can do that manually too, yes.
>>
>>
>>>
>>> I set HEAD to my migration file, and my file has a down revision of the
>>> previous head's revision. If I run 'neutron-db-manage --config-file
>>> ../neutron/etc/neutron.conf --config-file
>>> ../neutron/etc/neutron/plugins/ml2/ml2_conf.ini check_migration' there is
>>> no output so I guess that is OK.
>>>
>>> As I develop my new migration file, is there a way that I can test it
>>> (running neutron-db-migration, maybe)?
>>>
>>
>> When I test migrations I usually dump the database, run the migration
>> with neutron-db-manage upgrade HEAD (I think it's not necessary to specify
>> HEAD), and restore the db from the dump if the migration fails.
>>
>>
>>> Is there a way to run the migration file under the debugger, as well
>>> (importing pdb, for example)?
>>>
>>
>> The migration process is just like any python application, so I guess you
>> can debug it with pdb.
>>
>
> @PCM Ah, so use "neutron-db-manage upgrade HEAD". That was the piece that
> was missing. I take it there are no specific unit tests of the migration
> files?
>

... and also specify --service vpnaas

>
>
>
>>
>>>
>>> In the migration, I can add the columns needed

Re: [openstack-dev] [neutron] Plethora of dbase migration questions...

2015-07-07 Thread Salvatore Orlando
Some comments inline.

Salvatore

On 6 July 2015 at 20:00, Paul Michali  wrote:

> Hi,
>
> I have some urgent requests about migration that I'm hoping to get some
> info on. I'm working on a bug where I need to add two (related) fields to a
> table for VPNaaS. Here's the objectives related to migration...
>
> 1) create local_v4_ip and lcoal_v6_ip fields in the vpnservice table
> 2) for each entry in the vpnservice table:
> 2.1) Get the router.gw_port.fixed_ips list
> 2.2) Determine the version of each fixed IP and store the first of
> each version (if any) into the appropriate new field.
>
> I have created a migration file, and I changed the down_revision to be the
> number of the revision that is the first in the migration chain in the VPN
> repo.
>
> Here are the many questions I have...
>
> When I look in the VPN repo, the HEAD file has the version 'kilo', which
> is not the current head.
>

> Shouldn't it be the version number of the first file in the migration chain?
>

It should indeed. How are you generating the revision script? Using
neutron-db-manage it should be updated automatically [1]

For my commit, I'm assuming I change the HEAD file to use my migration
> file's version?
>

You can do that manually too, yes.


>
> I set HEAD to my migration file, and my file has a down revision of the
> previous head's revision. If I run 'neutron-db-manage --config-file
> ../neutron/etc/neutron.conf --config-file
> ../neutron/etc/neutron/plugins/ml2/ml2_conf.ini check_migration' there is
> no output so I guess that is OK.
>
> As I develop my new migration file, is there a way that I can test it
> (running neutron-db-migration, maybe)?
>

When I test migrations I usually dump the database, run the migration with
neutron-db-manage upgrade HEAD (I think it's not necessary to specify
HEAD), and restore the db from the dump if the migration fails.


> Is there a way to run the migration file under the debugger, as well
> (importing pdb, for example)?
>

The migration process is just like any python application, so I guess you
can debug it with pdb.


>
> In the migration, I can add the columns needed. What's the best way to
> fill out those fields - using raw SQL queries or create a Session object
> and access the VpnService object's router object?
>

If the default value for the column is not enough, and you need to specify
a value which depends on other values in the same row, I would prefer plain
SQL statements; but if that becomes cumbersome I guess it's OK to use
SQLAlchemy's session.
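
To give an idea of what the plain-SQL route could look like for your case,
here is a rough sketch. Table and column names follow your description, but
the join is a guess at the schema (double-check it against the models);
netaddr is used to pick the IP version.

import netaddr
import sqlalchemy as sa
from alembic import op


def upgrade():
    op.add_column('vpnservices',
                  sa.Column('local_v4_ip', sa.String(64), nullable=True))
    op.add_column('vpnservices',
                  sa.Column('local_v6_ip', sa.String(64), nullable=True))

    conn = op.get_bind()
    # Hypothetical join from each service to its router's gateway port IPs.
    rows = conn.execute(sa.text(
        "SELECT v.id, a.ip_address FROM vpnservices v "
        "JOIN routers r ON v.router_id = r.id "
        "JOIN ipallocations a ON r.gw_port_id = a.port_id")).fetchall()
    for svc_id, ip in rows:
        # Column name comes from our own literal set, so interpolation is safe.
        column = ('local_v4_ip' if netaddr.IPAddress(ip).version == 4
                  else 'local_v6_ip')
        # "IS NULL" keeps only the first address of each version.
        conn.execute(sa.text(
            "UPDATE vpnservices SET %s = :ip "
            "WHERE id = :id AND %s IS NULL" % (column, column)),
            ip=ip, id=svc_id)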


> I see there is some op.bind() call and then engine.execute(), but could
> use some help on the best way to extract the needed queries (I need to
> access the vpnservice's router, and then access the (Port) gw_port
> relationship, and from that access the (IPAllocation) fixed_ips list).
>

Perhaps you can point us to the review pages on gerrit, and we can provide
detailed comments there.


>
> Appreciate any advise here on how to debug the migration stuff...
>
> Paul Michali (pc_m)
>

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/cli.py#n124


>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] new RFE on quota enforcement

2015-06-25 Thread Salvatore Orlando
Hi,

since the quota enforcement patches for the 'better-quotas' blueprint did
not merge by liberty-1, and I forgot to resubmit the already-approved kilo
spec [1], I have submitted an RFE to comply with the process agreed for
Liberty [2].

As the policy [3] does not explicitly state that the submitter of an RFE
cannot campaign for it, I encourage you to have a look at [2] and vote for
it if you find it useful. Besides, there are already patches in
review for it [4], among them a hopefully useful devref patch [5]
(which maybe doubles as a spec).

Thanks for your time,
Salvatore

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo-backlog/better-quotas.html
[2] https://bugs.launchpad.net/vmware-nsx/+bug/1468934
[3]
http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/policies/blueprints.rst
[4]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/better-quotas,n,z
[5] https://review.openstack.org/190798
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] [neutron] Third Party CI Voting

2015-06-25 Thread Salvatore Orlando
Edgar,

In a nutshell, my point is that if we want to remove voting rights from
every CI, I'm fine with it.
However, I think what's being discussed in this thread is already captured
very well by [1], and I believe the policy it outlines is perfectly fine for
Neutron purposes.

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/policies/thirdparty-ci.rst

On 25 June 2015 at 17:08, Edgar Magana  wrote:

>   Thanks for your response, Salvatore. I am not sure what your position is
> on this topic. Are you fine with removing voting rights from all CIs?
>
>  Edgar
>
>   From: Salvatore Orlando
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Thursday, June 25, 2015 at 7:59 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [openstack-infra] [neutron] Third Party CI
> Voting
>
>
>
> On 25 June 2015 at 16:08, John Davidge (jodavidg) 
> wrote:
>
>>  Hi all,
>>
>>  Recent neutron third party CI issues have got me thinking again about a
>> topic which we discussed in Vancouver:
>>
>>  Should any Third Party CI have voting rights for neutron patches in
>> gerrit?
>>
>
>  Why should this be a decision for Neutron only?
>
>
>>
>>  I’d like to suggest that they shouldn’t.
>>
>>  A -1 from a third party CI tool can often be an indication that the CI
>> tool itself or the third party plugin is broken, rather than there being
>> issues with the patch under review. I don’t think there are many cases
>> where a third party CI tool has caught a genuine issue that Jenkins has
>> missed. With the current voting rights these CI tools cause a lot of noise
>> when they experience problems.
>>
>
>  As far as I am aware, no 3rd party CI tool has better coverage than the
> upstream one.
> Some 3rd party CIs exercise different code paths and might uncover some
> issue that the upstream CI did not cover. There will surely be people
> claiming this has happened a lot of times, and even a single issue found is
> invaluable; I would agree with that, but I also think that a 3rd party CI
> does not have to vote to be useful.
>
>>
>>  I’m not suggesting that the results of these tests be removed from the
>> page altogether - there are some cases where their results are useful to
>> the patch author/reviewer - but removing voting rights (or at least -1
>> rights) would save a patch from a –1 that might not be particularly
>> meaningful.
>>
>
>  Frankly I find the overwhelming number of CI messages - and email
> notifications - even more annoying than random -1s. Thankfully you can hide
> the former and filter out the latter.
> From the perspective of a 3rd party CI maintainer, I could use myself as an
> example; I maintain a CI which has now been broken for about 48 hours. I am
> busy with other tasks and cannot look at it now. I might be a terrible
> person for this, but that's my problem. If the CI was not voting at least I
> would not have annoyed people. (fwiw, I've disabled my CI now).
>
>  Also, I believe we already agreed that a working CI is no longer a
> requirement, as long as the plugin/driver maintainers can provide
> reasonable proof that their integration works?
>
>  Salvatore
>
>
>>
>>  Thoughts?
>>
>>  John
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] [neutron] Third Party CI Voting

2015-06-25 Thread Salvatore Orlando
On 25 June 2015 at 16:08, John Davidge (jodavidg) 
wrote:

>  Hi all,
>
>  Recent neutron third party CI issues have got me thinking again about a
> topic which we discussed in Vancouver:
>
>  Should any Third Party CI have voting rights for neutron patches in
> gerrit?
>

Why should this be a decision for Neutron only?


>
>  I’d like to suggest that they shouldn’t.
>
>  A -1 from a third party CI tool can often be an indication that the CI
> tool itself or the third party plugin is broken, rather than there being
> issues with the patch under review. I don’t think there are many cases
> where a third party CI tool has caught a genuine issue that Jenkins has
> missed. With the current voting rights these CI tools cause a lot of noise
> when they experience problems.
>

As far as I am aware, no 3rd party CI tool has better coverage than the
upstream one.
Some 3rd party CIs exercise different code paths and might uncover some
issue that the upstream CI did not cover. There will surely be people
claiming this has happened a lot of times, and even a single issue found is
invaluable; I would agree with that, but I also think that a 3rd party CI
does not have to vote to be useful.

>
>  I’m not suggesting that the results of these tests be removed from the
> page altogether - there are some cases where their results are useful to
> the patch author/reviewer - but removing voting rights (or at least -1
> rights) would save a patch from a -1 that might not be particularly
> meaningful.
>

Frankly I find the overwhelming number of CI messages - and email
notifications - even more annoying than random -1s. Thankfully you can hide
the former and filter out the latter.
From the perspective of a 3rd party CI maintainer, I could use myself as an
example; I maintain a CI which has now been broken for about 48 hours. I am
busy with other tasks and cannot look at it now. I might be a terrible
person for this, but that's my problem. If the CI was not voting at least I
would not have annoyed people. (fwiw, I've disabled my CI now).

Also, I believe we already agreed that a working CI is no longer a
requirement, as long as the plugin/driver maintainers can provide
reasonable proof that their integration works?

Salvatore


>
>  Thoughts?
>
>  John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] M Naming Poll ended - results still need to clear legal

2015-06-22 Thread Salvatore Orlando
Anyway, if you want to print t-shirts once legal is cleared, here's a
vintage football idea [1].
A little, pointless trivia fact: Como Calcio was sponsored for a few years
in the '80s by Mita copiers - now known as Kyocera.

Salvatore

[1]
http://www.calciocomo1907.it/images/news/thumbnails/Mattei%20Luca%20%201986-87_1000x700.JPG

On 22 June 2015 at 20:25, Clint Byrum  wrote:

> Excerpts from Clay Gerrard's message of 2015-06-22 10:30:49 -0700:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4983776e190c8dbc
> >
> > how is the top pick not the author of the book of five rings [1]
> >
>
> Agreed. I don't think people fully appreciated the reputation of Mr.
> Musashi. ;)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Modular L2 Agent

2015-06-22 Thread Salvatore Orlando
I see Kyle's point that this is not something in scope for Liberty at this
stage.

On the other hand, I would rather avoid having multiple agents on the
compute node performing various tasks in an uncoordinated way (well,
actually relying on neutron-server for coordination).

QoS is an example, but what Miguel is doing for QoS also applies, for
instance, to security group and allowed-address-pairs processing. Even if
Mohammad probably has in mind a "modular" agent that is able to talk to
different data planes using a well-defined driver interface, a similar
framework could be used for "augmenting" the capabilities of an agent, as
Miguel mentions.

I would probably start with something for enabling the L2 agent to process
"features" such as QoS and security groups, working on the OVS agent, and
then in a second step abstract a driver interface for communicating with
the data plane. But I honestly do not know whether this would keep the work
too "OVS-centric" and therefore play poorly with the current efforts to put
linux bridge on par with OVS in Neutron. For those questions we should seek
an answer from our glorious reference control plane lieutenant, and perhaps
also from Sean Collins, who's coordinating efforts around linux bridge
parity.
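
Just to make the "augmenting" idea a bit more tangible, I am thinking of
something along the lines of the sketch below - every name here is
hypothetical and only meant to show the shape of such a driver interface:

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class AgentFeatureDriver(object):
    """A feature (QoS, security groups, ...) pluggable into an L2 agent."""

    @abc.abstractmethod
    def initialize(self, dataplane_api):
        """Bind the feature to the data plane driver exposed by the agent."""

    @abc.abstractmethod
    def handle_port(self, context, port):
        """(Re)apply the feature's state when a port is wired or updated."""


class QosFeatureDriver(AgentFeatureDriver):
    """Example: translate QoS rules into data plane settings."""

    def initialize(self, dataplane_api):
        self.dataplane_api = dataplane_api

    def handle_port(self, context, port):
        for rule in port.get('qos_rules', []):
            # e.g. map a bandwidth limit onto OVS queue settings
            self.dataplane_api.apply_rule(port['id'], rule)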

Salvatore

On 22 June 2015 at 16:30, Miguel Angel Ajo  wrote:

>
>
> In the context of Quality of Service, we need to extend the L2 agents
> (SR-IOV, OVS and LB), and we didn't want to simply hijack the agents'
> processing loop, so we took a moment and put together a Modular L2 design:
>
> https://review.openstack.org/#/c/189723/
>
> If you find it reasonable to do it in this context, so it can be reused
> for Neutron in general later, please join the reviews.
>
> I'm not sure if Irena was involved in previous Modular L2 Agent design
> sessions.
>
>
> Best regards,
> Miguel Ángel.
>
>
> Mohammad Banikazemi wrote:
>
>
>
> During the last couple of ML2 group meetings, the subject of Modular L2
> Agents has come up again and I was tasked to bring up the subject to the
> attention of the larger community.
> We are aware of the ongoing efforts to improve the L2 agent(s) and the
> patches which are currently under review and those that got merged
> recently. The question is whether the Neutron community thinks the effort
> started (and suspended) a while ago around creating a modular L2 agent is
> worth pursuing at all and if yes, whether this is a good cycle to get that
> work possibly restarted.
>
> Best,
>
> Mohammad
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][api] Neutron micro-versioning update

2015-06-17 Thread Salvatore Orlando
As you are probably aware, an api-wg guideline for microversioning is under
review [1].
Needless to say, Neutron developers interested in this work should have a
look at [1] - if nothing else because we need to ensure we are aligned -
and influence the guideline where appropriate.

Experimental APIs are one item where Neutron is not aligned with the
proposed guideline - nor with the project already implementing
microversioning.
While it is known that nova chose to adopt experimental APIs only as a
temporary mechanism [2], the idea of experimental APIs got pretty much
slammed down unanimously in an Ironic meeting (in [3] it sounds like the
word 'experimental' really tickles the Ironic development team).
Therefore, Neutron needs to rethink the proposed API evolution strategy
without experimental APIs. Every new API introduced will be versioned.
While versioning still allows us to evolve the API as we wish, the drawback
is that we'll have to expect several backward incompatible changes while
new APIs stabilise after being introduced.
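
Mechanically, versioning every new API boils down to a negotiation step like
the sketch below; the bounds and the error path are purely illustrative, and
[1] prescribes the actual behaviour (status codes, headers and so on):

MIN_VERSION = (2, 0)   # illustrative bounds, not Neutron's real ones
MAX_VERSION = (2, 3)


def negotiate_version(requested):
    """Parse an 'X.Y' microversion string and check the supported range."""
    try:
        version = tuple(int(part) for part in requested.split('.'))
    except ValueError:
        raise ValueError("Malformed version string: %r" % requested)
    if not MIN_VERSION <= version <= MAX_VERSION:
        # The guideline under review describes how such requests should be
        # rejected and how the supported range is advertised to clients.
        raise ValueError("Version %s is not supported" % requested)
    return version


assert negotiate_version('2.3') == (2, 3)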

On practical matters, I am soon going to add a list of todo items to the
spec [4] (which we'll probably amend anyway to reflect the outcome of the
discussion on [1]). If you're interested in cooperating in this effort,
please pick one item. If we get a decent number of "volunteers" we'll
try to set up a weekly meeting.

One aspect where general feedback would be welcome is whether the
microversioning work should be based on master or piggyback on the pecan
switch effort - therefore implementing versioning directly in the new
framework. The pecan switch is being implemented in a feature branch [5]

Thanks for your attention,
Salvatore

[1] https://review.openstack.org/#/c/187112/
[2]
http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[3]
http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-06-15-17.02.log.html
[4]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/microversioning.html
[5]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/pecan,n,z
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Salvatore Orlando
Some more comments inline.

Salvatore

On 16 June 2015 at 19:00, Carl Baldwin  wrote:

> On Tue, Jun 16, 2015 at 12:33 AM, Kevin Benton  wrote:
> >>Do these kinds of tests even make sense? And are they feasible at all? I
> >> doubt we have any framework for injecting anything in neutron code under
> >> test.
> >
> > I was thinking about this in the context of a lot of the fixes we have
> for
> > other concurrency issues with the database. There are several exception
> > handlers that aren't exercised in normal functional, tempest, and API
> tests
> > because they require a very specific order of events between workers.
> >
> > I wonder if we could write a small shim DB driver that wraps the python
> one
> > for use in tests that just makes a desired set of queries take a long
> time
> > or fail in particular ways? That wouldn't require changes to the neutron
> > code, but it might not give us the right granularity of control.
>
> Might be worth a look.
>

It's a solution for pretty much mocking out the DB interactions. This would
work for fault injection on most neutron-server scenarios, both for RESTful
and RPC interfaces, but we'll need something else to "mock" interactions
with the data plane that are performed by agents. I think we already have
a mock for the AMQP bus on which we shall just install hooks for injecting
faults.
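
As a cheap approximation of the shim driver idea, one could hook SQLAlchemy's
cursor events on the engine used under test and inject latency or failures
for matching statements. A sketch, with made-up fault rules:

import re
import time

import sqlalchemy as sa
from sqlalchemy import event

engine = sa.create_engine('sqlite://')  # stand-in for the test engine

# (statement pattern, delay in seconds, exception or None) - all made up.
FAULTS = [
    (re.compile(r'INSERT INTO securitygroups'), 2.0, None),
    (re.compile(r'UPDATE quotas'), 0.0, RuntimeError('injected failure')),
]


@event.listens_for(engine, 'before_cursor_execute')
def inject_fault(conn, cursor, statement, parameters, context, executemany):
    for pattern, delay, exc in FAULTS:
        if pattern.search(statement):
            time.sleep(delay)  # slow the statement down...
            if exc is not None:
                raise exc      # ...or make it fail outright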


> >>Finally, please note I am using DB-level locks rather than non-locking
> >> algorithms for making reservations.
> >
> > I thought these were effectively broken in Galera clusters. Is that not
> > correct?
>
> As I understand it, if two writes to two different masters end up
> violating some db-level constraint then the operation will cause a
> failure regardless of whether there is a lock.
>


> Basically, on Galera, instead of waiting for the lock, each will
> proceed with the transaction.  Finally, on commit, a write
> certification will double-check constraints with the rest of the
> cluster.  It is at this point where
> Galera will fail one of them as a deadlock for violating the
> constraint.  Hence the need to retry.  To me, non-locking just means
> that you embrace the fact that the lock won't work and you don't
> bother to apply it in the first place.
>

This is correct.

DB-level locks are broken in Galera. As Carl says, write sets are sent out
for certification after a transaction is committed.
So write intent locks, or even primary key constraint violations, cannot
be verified before committing the transaction.
As a result you incur a write set certification failure, which is notably
more expensive than an instance-level rollback, and manifests as a
DBDeadlock exception to the OpenStack service.

Retrying a transaction is also a way of embracing this behaviour... you
just accept the idea of having to go through write set certification
failures. Non-locking approaches instead aim at avoiding write set
certification failures altogether.
The downside is that, especially in high-concurrency scenarios, the
operation is retried many times, and this might become even more expensive
than dealing with the write set certification failure.
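
For completeness, here is a sketch of what "embracing" the failure looks
like with oslo.db's retry decorator (I believe the arguments below match
its signature, but double-check); the function and the reservations table
are hypothetical:

import sqlalchemy as sa
from oslo_db import api as oslo_db_api


@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True,
                           retry_interval=0.1, inc_retry_interval=True)
def make_reservation(session, tenant_id, resource, amount):
    """Write a reservation in one transaction, retrying on DBDeadlock.

    On Galera, a conflicting write on another node surfaces at commit
    time as a certification failure, reported as DBDeadlock; the
    decorator rolls back and re-runs the whole function.
    """
    with session.begin(subtransactions=True):
        session.execute(sa.text(
            "INSERT INTO reservations (tenant_id, resource, amount) "
            "VALUES (:tenant, :resource, :amount)"),
            {'tenant': tenant_id, 'resource': resource, 'amount': amount})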

But zzzeek (Mike Bayer) is coming to our aid; as part of his DBFacade
work, we should be able to treat an active/active cluster as active/passive
for writes, and active/active for reads. This means that the write set
certification issue just won't show up, and the benefits of active/active
clusters will still be attained for most operations (I don't think there's
any doubt that SELECT operations represent the majority of all DB
statements).


> If my understanding is incorrect, please set me straight.
>

You're already straight enough ;)


>
> > If you do go that route, I think you will have to contend with DBDeadlock
> > errors when we switch to the new SQL driver anyway. From what I've
> observed,
> > it seems that if someone is holding a lock on a table and you try to grab
> > it, pymysql immediately throws a deadlock exception.
>

> I'm not familiar with pymysql to know if this is true or not.  But,
> I'm sure that it is possible not to detect the lock at all on galera.
> Someone else will have to chime in to set me straight on the details.
>

DBDeadlocks without multiple workers also suggest we should look closely at
what eventlet is doing before placing the blame on pymysql. I don't think
that the switch to pymysql is changing the behaviour of the database
interface; I think it's changing the way in which Neutron interacts with the
database, thus unveiling concurrency issues that we did not spot before, as
we were relying on a sort of implicit locking triggered by the fact that
some parts of MySQL-Python were implemented in C.


>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Salvatore Orlando
On 16 June 2015 at 18:49, Carl Baldwin  wrote:

> On Thu, Jun 11, 2015 at 2:45 PM, Salvatore Orlando 
> wrote:
> > I have since been following a different approach, and a set of patches,
> > including a devref one [2], is up for review [3]. This hardly completes
> the
> > job: more work is required on the testing side, both as unit and
> functional
> > tests.
> >
> > As for the spec, since I honestly would like to spare myself the hassle
> of
> > rewriting it, I would kindly ask our glorious drivers team if they're ok
> > with me submitting a spec in the shorter format approved for Liberty
> without
> > going through the RFE process, as the spec is however in the Kilo
> backlog.
>
> It took me a second read through to realize that you're talking to me
> among the drivers team.  Personally, I'm okay with this and our
> currently documented policy seems to allow for this until Liberty-1.
>

Great!


>
> I just hope that this isn't an indication that we're requiring too
> much in this new RFE process and scaring potential filers away.  I'm
> trying to learn how to write good RFEs, so let me give it a shot:
>
>   Summary:  "Need robust quota enforcement in Neutron."
>
>   Further Information:  "Neutron can allow exceeding the quota in
> certain cases.  Some investigation revealed that quotas in Neutron are
> subject to a race where parallel requests can each check quota and
> find there is just enough left to fulfill its individual request.
> Each request proceeds to fulfillment with no more regard to the quota.
> When all of the requests are eventually fulfilled, we find that they
> have exceeded the quota."
>
> Given my current knowledge of the RFE process, that is what I would
> file as a bug in launchpad and tag it with 'rfe.'
>

The RFE process is fine and relatively simple. I was just luring somebody
into giving me the exact text to put in it!
Joking aside, I was suggesting this because, since it was a "backlog" spec,
it was already assumed to be something we wanted to have for Neutron, and
we could thus skip the RFE approval step.


> > For testing, I wonder what strategy you advise for implementing
> functional
> > tests. I could do some black-box testing and verifying quota limits are
> > correctly enforced. However, I would also like to go a bit white-box and
> > also verify that reservation entries are created and removed as
> appropriate
> > when a reservation is committed or cancelled.
> > Finally it would be awesome if I was able to run in the gate functional
> > tests on multi-worker servers, and inject delays or faults to verify the
> > system behaves correctly when it comes to quota enforcement.
>
> Full black box testing would be impossible to achieve without multiple
> workers, right?  We've proposed adding multiple worker processes to
> the gate a couple of times, if I recall, including a recent one.
>

Yeah but Neutron was not as stable with multiple workers, and we had to
revert it (I think I did the revert)


> Fixing the failures has not yet been seen as a priority.
>

I wonder if this is because developers are too busy bikeshedding or chasing
unicorns, or because the issues we saw are mostly due to the way we run
tests in the gate and are not found by operators in real deployments
(another option is that operators are too afraid of Neutron's
unpredictability and do not even try turning on multiple workers).


> I agree that some whitebox testing should be added.  It may sound a
> bit double-entry to some but I don't mind, especially given the
> challenges around block box testing.  Maybe Assaf can chime in here
> and set us straight.
>

I want white-box testing. I think it's important. Unit tests to an extent
do this, but they don't test the whole functionality. On the other hand,
black-box testing tests the functionality, but it does not tell you whether
the system is actually behaving as you expect. If it's not, it means you
have a fault. And that fault will eventually emerge as a failure. So we
need this kind of testing. However, I need hooks in Neutron in order to
achieve this - like a sqlalchemy event listener that informs me of
completed transactions, for instance, or hooks to perform fault injection,
like adding a delay or altering the return value of a function. It would be
good for me to know whether this is in the testing roadmap for Liberty.
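
To make this concrete, the kind of hook I'm thinking of could be as small
as the following sketch (test-only code; all names are illustrative):

from sqlalchemy import event
from sqlalchemy.orm import Session

committed_sessions = []

@event.listens_for(Session, "after_commit")
def _record_commit(session):
    # Fires once per successfully committed session; a functional test
    # could assert here that reservation rows were really persisted.
    committed_sessions.append(id(session))

A fault-injection hook could similarly be a decorator that adds a delay or
raises an exception before invoking the wrapped function.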


>
> > Do these kinds of test even make sense? And are they feasible at all? I
> > doubt we have any framework for injecting anything in neutron code under
> > test.
>
> Dunno.


> > Finally, please note I am using DB-level locks rather than non-locking
> > algorithm

Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Salvatore Orlando
On 16 June 2015 at 14:38, Lucas Alvares Gomes  wrote:

> Hi
>
> >> So if our min_version is 2.1 and the max_version is 2.50. That means
> >> alternative implementations need implement all the 50 versions
> >> api...that sounds pain...
> >
> >
> > Yes, it's pain, but it's no different than someone who is following the
> > Amazon EC2 API, which cuts releases at a regular (sometimes every 2-3
> weeks)
> > clip.
> >
> > In Amazon-land, the releases are date-based, instead of
> > microversion/incrementing version-based, but the idea is essentially the
> > same.
> >
>
> Sorry I might be missing something. I don't think one thing justify
> the other, plus the problem seems to be the source of truth. I thought
> that the idea of big tent in OpenStack was to not having TC to "pick
> winners". E.g, If someone wants to have an alternative implementation
> of the Baremetal service they will always have to follow Ironic's API?
> That's unfair, cause they will always be behind and mostly likely
> won't weight much on the decisions of the API.
>

I agree and at the same time I disagree with this statement.

A competing project in the Baremetal (or networking, or pop-corn-as-a-
service) area can move in two directions:
1) Providing a different implementation of the same API that the
"incumbent" (Ironic in this case) provides.
2) Supplying different paradigms, including a different user API, thus
presenting itself as a "new way" of doing Baremetal (and this is exactly
what Quantum did to nova-network).

Both cases are valid, I believe.
In the first case, the advantage is that operators could switch between the
various implementations without affecting their users (this does not mean
that the switch won't be painful for them of course). Also, users shouldn't
have to worry about what's implementing the service, as they always
interact with the same API.
However, it creates a problem regarding who controls said API: the team
from the "incumbent" project, the new team, both teams, the API-WG, or
no-one?
The second case is super-painful for both operators and users (do you need
a refresher on the nova-network vs neutron saga? We're at the 5th series
now, and the end is not even in sight). However, it completely avoids the
governance problem arising from having APIs which are implemented by
multiple projects.

So, even if I understand where Jay is coming from, and ideally I'd love to
have APIs associated with app catalog elements rather than projects, I
think there is not yet a model that would allow us to achieve this when
multiple API implementations are present. So I also understand why the
headers have been implemented in the current way.



>
> As I mentioned in the other reply, I find it difficult to talk about
> alternative implementations while we do not decouple the API
> definition level from the implementation level. If we want alternative
> implementations to be a real competitor we need to have a sorta of
> program in OpenStack that will be responsible for delivering a
> reference API for each type of project (Baremetal, Compute, Identity,
> and so on...).
>

Indeed. If I understood what you wrote correctly, this is in line with what
I stated in the previous paragraph.
Nevertheless, since afaict we do not have any competing APIs at the moment
(the nova-network API is part of the Nova API so we might be talking about
overlap there rather than competition), how crazy does it sound if we say
that for OpenStack Nova is the compute API and Ironic the Bare Metal API
and so on? Would that be an unacceptable power grab?


>
> > There is GREAT value to having an API mean ONE thing and ONE thing only.
> It
> > means that developers can code against something that isn't like
> quicksand
> > -- constantly changing meanings.
>
> +1, sure.
>
> Cheers,
> Lucas
>


Re: [openstack-dev] [neutron] Microversioning work questions and kick-start

2015-06-14 Thread Salvatore Orlando
On 12 June 2015 at 16:58, Henry Gessau  wrote:

> On Thu, Jun 11, 2015, Salvatore Orlando  wrote:
> > Finally, I received queries from several community members that would be
> keen
> > on helping supporting this microversioning effort. I wonder if the PTL
> and the
> > API lieutenants would ok with agreeing to have a team of developers
> meeting
> > regularly, working towards implementing this feature, and report progress
> > and/or issues to the general Neutron meeting.
>
> Yes, I am ok with agreeing to form such a team. ;) With an effort this
> complex
> it makes sense to have tl;dr type summaries in the general meeting. This
> has
> worked well for large-effort features before, and when the work winds down
> the
> topic can fold back into the main meeting.
>

Thanks Henry!

So I would say that perhaps we could gather interest from developers -
either using this thread or another one - and once we have a critical mass
of, for instance, 5 developers, we will kick off the activities, book a
weekly slot for more or less regular meetings, set expectations, review
design, discuss implementation, and hopefully get this thing done.

Salvatore




Re: [openstack-dev] [neutron] Microversioning work questions and kick-start

2015-06-14 Thread Salvatore Orlando
On 12 June 2015 at 12:22, Sean Dague  wrote:

> On 06/11/2015 06:03 PM, Salvatore Orlando wrote:
> > As most of you already know, work is beginning to move forward on the
> > micro-versioned Neutron API, for which a specification is available at
> [1]
> >
> > From a practical perspective there is one non-negligible preliminary
> > issue that needs attention: the Neutron API URI prefix includes the
> > full version number - currently 2.0. For instance:
> >
> >  http://neutron_server:9696/v2.0/networks.json
> >
> > This clearly makes a microversioned approach a bit weird - if you have
> > to use, for instance, 2.0 as a URI prefix for API version 2.12.
> > On the one hand it might make sense to start the micro-versioned API as
> > a sort of clean slate, possibly using a version-agnostic URI prefix or
> > no prefix at all; also as pointed out by some community members it will
> > give a chance to validate this versioned API approach.
> > This will have however the drawback that both the unversioned,
> > extension-based so-called 2.0 API will keep living and evolving
> > side-by-side with the versioned API, and then switching to the versioned
> > API will not be transparent to clients.
> > It would be good to receive some opinions from the developer and user
> > community on the topic.
>
> It will definitely be challenging to have both evolving at once. The
> Nova team had a lot of pains in that happening in the 18 months of v3.0
> work. Once we got the microversion mechanism in place we hard froze
> v2.0. That being said we actually had 2 internal code bases, so our
> situation was a bit gorpier than yours.
>

Well, we'd have both "extensions" and "revisions" evolving at the same
time. The situation won't be nearly as difficult to handle as nova v3.0,
but it will still be a hassle.
While I understand the position of those advocating for not freezing the
current extension-based mechanism until microversioning is proved and
tested, my concern is that this effort is already on the road to failure if
it remains, for the foreseeable future, pretty much an experimental
alternative to the current way of evolving the API.

Therefore I would say that the community might accept that if
microversioning is functionally complete and reliably working in version X
(where X is an openstack release), then automatically the "old" API will be
frozen in the same version, deprecated in X+1 and killed in X+2.



>
> On the version in the url: My expectation is at some point the future
> we'll privot out of having a version string in our URL entirely, but
> it's one of those things that can come later.
> The url root is mostly important from a service catalog perspective, in
> that what's there matches what the code returns in all it's internal
> links. Honestly, long term, it would be great if 1) we actually
> developer standards for naming in the service catalog 2) the API
> services stop having their API url in code, but instead reflect it back
> from the catalog. Which would fix one of the gorpiest parts of putting
> your API servers behind haproxy or ssl termination. I.e. I wouldn't be
> too concerned on that front, it looks a little funny, but won't really
> get in anyone's way.
>
> > Furthermore, another topic that has been brought up is whether plugins
> > should be allowed to control the version of the API server, like
> > specifying minimum and maximum version. My short answer is no, because
> > the plugin should implement the API, not controlling it. Also, the spec
> > provides a facility for plugins to disable features if they are unable
> > to support them.
> >
> > Finally, I received queries from several community members that would be
> > keen on helping supporting this microversioning effort. I wonder if the
> > PTL and the API lieutenants would ok with agreeing to have a team of
> > developers meeting regularly, working towards implementing this feature,
> > and report progress and/or issues to the general Neutron meeting.
> >
> > Salvatore
> >
> > [1] https://review.openstack.org/#/c/136760/
> >
> >
> >
> >
> >
> >
> >
> >
> >
>
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [Openstack][Neutron][OVN] OVN-Neutron - TODO List

2015-06-14 Thread Salvatore Orlando
Gal,

thanks for this summary.
Some additional info inline.

Salvatore

On 12 June 2015 at 19:38, Gal Sagie  wrote:

> Hello All,
>
> I wanted to share some of our next working items and hopefully get more
> people on board with the project.
> I personally would mentor any new comer that wants to get familiar with
> the project and help
> with any of these items.
> You can also feel free to approach Russell Bryant (rbry...@redhat.com)
> which is heading the OVN-Openstack integration.
>
> We both are usually active on IRC at #openstack-neutron-ovn (freenode) ,
> you can drop us a visit if you have any questions.
>
> The Neutron sprint in Fort Collins [1] has a work item for OVN, hopefully
> some work can start
> there on some of these items (or others).
> Russell Bryant and myself unfortunately won't be there, but feel free to
> contact us online or in email.
>
> *1. Security Group Implementation*
> Currently security groups are not being configured to OVN, there is a
> document written
> about how to model security groups to OVN northbound ACL's. [2]
> I suspect getting this right is not going to be trivial, hopefully i might
> be able to also start tackling
> this item next week.
>

From what I recall Miguel was very interested in helping out on this front.
Have you already reached out to him?


>
> *2. Provider Network support*
> Russell sent a design proposal to the ovs-dev mailing list [3], need to
> follow on that
> and implement in OVN
>

I think I have replied to that proposal with a few comments, perhaps you
might have a look at those.


>
> *3. Tempest configuration*
> Russell has a patch for that [4] which needs additional help to make it
> work.
>

That patch has merged now. So perhaps Russell does not need help anymore!

>
> *4. Unit Tests / Functional Tests *
> We want to start adding more testing to the project in all fronts
>
> *5. Integration with OVS-DPDK*
> OVS-DPDK has a ML2 mechanism driver [5] to enable userspace DPDK dataplane
> for OVS,
> we want to try and see how this can combine with OVN mechanism driver
> together. (one idea is to
> use hierarchical port binding for that)
> Need to design and test it and provide additional working items for this
> integration
>

I think this is a rare case where the OVN integration might leverage
additional mechanism drivers as AFAICT the DPDK driver mainly interacts
with VIF plugging (operating at the Neutron port bindings level), and does
not interfere with logical network resource processing.


>
> *6. L2 Gateway Integration*
> OVN supports L2 gateway translation between virtual and physical networks.
> We want to leverage the current L2-Gateway sub project in stack forge [6]
> and use it
> to enable configuration of L2 gateways in OVN.
> I have looked briefly at the project and it seems the API's are good, but
> currently the
> implementation relay on RPC and agent implementation (where we would like
> to
> configure it using OVSDB) , so this needs to be sorted and tested.
>

Last time I checked the progress of this project, they were focusing on ToR
VxLAN offload as a first use case. And, as far as I recall, this is what
networking-l2gw provides nowadays (Armando and Sukdev might have more info).
Nevertheless, the API is generic enough that in my opinion it might be
possible for OVN to leverage it. We shall implement a distinct l2gw service
plugin like [1]; as I have some familiarity with this kind of APIs, let me
know if I can be of any help.

[1]
http://git.openstack.org/cgit/openstack/networking-l2gw/tree/networking_l2gw/services/l2gateway/plugin.py
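
To sketch what I mean (a skeleton only, with illustrative names; the point
is that networking-l2gw's RPC casts to an agent would be replaced by OVSDB
transactions):

from networking_l2gw.db.l2gateway import l2gateway_db

class OVNL2GatewayPlugin(l2gateway_db.L2GatewayMixin):

    def create_l2_gateway_connection(self, context, l2_gateway_connection):
        conn = super(OVNL2GatewayPlugin, self).create_l2_gateway_connection(
            context, l2_gateway_connection)
        # OVSDB transaction towards OVN instead of an RPC cast to an agent;
        # self._ovn and its method are purely hypothetical here.
        self._ovn.add_gateway_connection(conn)
        return conn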


> Another issue is related to OVN it self which doesn't have L2 Gateway
> awareness
> in the Northbound DB (which is the DB that neutron configure) but only has
> the API
> in the Southbound DB
>

Yeah, this could be some sort of a problem... I don't think Neutron should
interact with the SB DB; the OVN architecture has not been conceived to
work this way.


>
> *7. QoS Support*
> We want to be able to support the new QoS API that is being implemented in
> Liberty [7]
> Need to see how we can leverage the work that will implement this for OVS
> in the
> reference implementation and what additions need to be made for OVN case.
>

Do you already have anything in mind?

>
>

> *8. L3 Implementation*
> L3 is not yet implemented in OVN, need to follow up on the design and add
> the L3 service plugin
> and implementation.
>

If I can be of any help on this front, I'd be glad to offer my assistance
(which may be of no use to you, but that's another story!)
By the way, is there a reason why native OVN DHCP/metadata access support
is not in this todo list?

>
>

> *9. VLAN Aware VM's*
> This is not directly related to OVN, but we need to see that OVN use case
> of configuring parent
> ports (for the use case of Containers inside a VM) is being addressed, and
> if the implementation
> is finished, to align the API for OVN as well.
>

I reckon the proposed API (master/child ports) or the alternative conce

Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-11 Thread Salvatore Orlando
It is however interesting that both "lock wait timeout" and "missing
savepoint" errors occur in operations pertaining to the same table -
securitygroups in this case.
I wonder if the switch to pymysql has not actually uncovered some other bug
in Neutron.

I have no opposition to a revert, but since this will affect most projects,
it's probably worth finding some time to investigate what is triggering
this failure when sqlalchemy is backed by pymysql before doing that.
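
For reference, the pattern under suspicion is SQLAlchemy's nested
transaction, which maps to a SQL SAVEPOINT (snippet for illustration only):

with session.begin(subtransactions=True):
    create_security_group(context, sg)    # outer transaction
    with session.begin_nested():          # emits SAVEPOINT
        update_related_rows(context)      # RELEASE / ROLLBACK TO SAVEPOINT

A "savepoint does not exist" error suggests the savepoint was released or
rolled back before the inner block completed - for instance because the
driver and the server disagree on the state of the connection.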

Salvatore

On 12 June 2015 at 03:32, Eugene Nikanorov  wrote:

> Hi neutrons,
>
> I'd like to draw your attention to an issue discovered by rally gate job:
>
> http://logs.openstack.org/96/190796/4/check/gate-rally-dsvm-neutron-rally/7a18e43/logs/screen-q-svc.txt.gz?level=TRACE
>
> I don't have bandwidth to take a deep look at it, but first impression is
> that it is some issue with nested transaction support either on sqlalchemy
> or pymysql side.
> Also, besides errors with nested transactions, there are a lot of Lock
> wait timeouts.
>
> I think it makes sense to start with reverting the patch that moves to
> pymysql.
>
> Thanks,
> Eugene.
>
>


[openstack-dev] [neutron] Microversioning work questions and kick-start

2015-06-11 Thread Salvatore Orlando
As most of you already know, work is beginning to move forward on the
micro-versioned Neutron API, for which a specification is available at [1]

From a practical perspective there is one non-negligible preliminary issue
that needs attention: the Neutron API URI prefix includes the full version
number - currently 2.0. For instance:

 http://neutron_server:9696/v2.0/networks.json

This clearly makes a microversioned approach a bit weird - if you have to
use, for instance, 2.0 as a URI prefix for API version 2.12.
On the one hand it might make sense to start the micro-versioned API as a
sort of clean slate, possibly using a version-agnostic URI prefix or no
prefix at all; also as pointed out by some community members it will give a
chance to validate this versioned API approach.
This will have however the drawback that both the unversioned,
extension-based so-called 2.0 API will keep living and evolving
side-by-side with the versioned API, and then switching to the versioned
API will not be transparent to clients.
It would be good to receive some opinions from the developer and user
community on the topic.

Furthermore, another topic that has been brought up is whether plugins
should be allowed to control the version of the API server, like specifying
minimum and maximum version. My short answer is no, because the plugin
should implement the API, not controlling it. Also, the spec provides a
facility for plugins to disable features if they are unable to support them.

Finally, I received queries from several community members that would be
keen on helping support this microversioning effort. I wonder if the PTL
and the API lieutenants would be ok with agreeing to have a team of
developers meeting regularly, working towards implementing this feature,
and reporting progress and/or issues to the general Neutron meeting.

Salvatore

[1] https://review.openstack.org/#/c/136760/


Re: [openstack-dev] [keystone] [nova] [oslo] [neutron][cross-project] Split Policy rules into two parts.

2015-06-11 Thread Salvatore Orlando
I am not able to say whether this works for Nova. It surely works for
Neutron - from a functional perspective at least.

I still don't know however whether this choice is the best way to proceed,
and perhaps you can help me understand better.

Role checks are always expressed through policy.json and can be enforced in
middleware. Does this mean that there is also a centralized policy.json, or
will we keep per-project policy files even for role checks?

Scope checks - i.e. application-specific checks - can be enforced in any
way the application developers wish. They can use policy.json, be hardcoded
or, if they wish, ask Pythia, the Oracle of Delphi. From an operator
perspective, this means that every project can enforce policies in a
different way. Is this going to be practical and maintainable? I can't
speak for operators, but I would like to understand a bit better what this
implies for them.
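
For the sake of discussion, this is how I picture the two checks in code (a
rough sketch using oslo.policy; the rule name, helpers and exception are
all illustrative):

from oslo_policy import policy

enforcer = policy.Enforcer(conf)  # role rules still loaded from policy.json

def update_network(context, net_id, body):
    # 1) Role check: token data only, no DB fetch - could even live in
    #    middleware.
    enforcer.enforce("update_network", {}, context.to_dict(), do_raise=True)
    # 2) Scope check: owned by the project developers, as it needs the
    #    object from the DB to locate the project id.
    net = get_network(context, net_id)
    if not (net.shared or net.tenant_id == context.tenant_id):
        raise NotAuthorized()  # illustrative exception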

Salvatore







On 11 June 2015 at 17:47, Adam Young  wrote:

>  Sean had a really good point when he mentioned that the Developers know
> what need to be enforced, and I think this is why he suggested that the
> base policy implementation be in Python code, not the policy JSON DSL.
>
> The main thrust of the dynamic policy has been to get the role-to-api
> assignment more flexible.  However, there is another side to each policy
> rule; figureing out where the project (nee' tenant) id is in the request;
> is it part of the URL, part of the request body, or in the object returned
> from the database.  This part really should be handled by the developer
> working on the policy rule, and it should not be changed.
>
> So...what if we say that we split policy into two checks;  a role check,
> and a scope check.  Both checks must pass in order for the user to get
> access to the API.  The Scope check is not going to be dynamic;  once set,
> they will pretty much stay set.   It might be done using the policy.json,
> or done in code, but it will be separate from the role check.
>
>
> The Neutron policy checks for things like
>
> "shared": "field:networks:shared=True", "shared_firewalls":
> "field:firewalls:shared=True", "shared_firewall_policies":
> "field:firewall_policies:shared=True", "shared_subnetpools":
> "field:subnetpools:shared=True",
>
> Would be handled by the dev teams later policy check; anything that
> requires actually fetching the object from the database is postponed to
> this stage.
>
>
> The role check will come from the policy.json file.  This will allow the
> operator to fine tune how roles are handled.  Any thing else that can be
> explicitly checked based on the token will be fair game, but not API
> specific values;  no database fetch will be performed at this point.  The
> assumption is that this policy check could be generic enough to be
> performed in middleware, and might even be enforced based on the URL
> instead of the pseudo random namespacing we do now.
>
> Does this suggestion work for Nova?  I think it will make the overall
> policy much easier to maintain in the field.
>


[openstack-dev] [Neutron] Quota enforcement

2015-06-11 Thread Salvatore Orlando
Aloha!

As you know I pushed spec [1] during the Kilo cycle but, lazy
procrastinator that I am, I did not manage to complete it in time for the
release.

This actually gave me a chance to realise that the spec I pushed and had
approved did not make a lot of sense. Even worse, there were some false
claims, especially when it comes to active-active DB clusters such as
MySQL Galera.

Thankfully nobody bothered to look at that - possibly because it renders
horribly in HTML - and that spared me a public shaming.

I have then been following a different approach, and a set of patches,
including a devref one [2], is up for review [3]. This hardly completes the
job: more work is required on the testing side, both as unit and functional
tests.

As for the spec, since I honestly would like to spare myself the hassle of
rewriting it, I would kindly ask our glorious drivers team if they're ok
with me submitting a spec in the shorter format approved for Liberty
without going through the RFE process, as the spec is already in the Kilo
backlog.

For testing I wonder what strategy you would advise for implementing
functional tests. I could do some black-box testing and verify that quota
limits are correctly enforced. However, I would also like to go a bit
white-box and also verify that reservation entries are created and removed
as appropriate when a reservation is committed or cancelled.
Finally it would be awesome if I were able to run functional tests in the
gate on multi-worker servers, and inject delays or faults to verify the
system behaves correctly when it comes to quota enforcement.

Do these kinds of tests even make sense? And are they feasible at all? I
doubt we have any framework for injecting anything into Neutron code under
test.

Finally, please note I am using DB-level locks rather than a non-locking
algorithm for making reservations. I can move to a non-locking algorithm -
Jay proposed one for Nova in Kilo, and I could just implement that one -
but first I would like to be convinced with a decent proof (or sort of)
that the extra cost deriving from collisions among workers is overshadowed
by the cost of having to handle a write-set certification failure and retry
the operation.
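
For clarity, this is more or less what the locking approach looks like (a
condensed sketch with illustrative model names - not the actual patch):

def make_reservation(session, tenant_id, resource, delta, limit):
    with session.begin(subtransactions=True):
        # The row-level lock serializes concurrent quota checks for a
        # tenant, closing the check-then-act race between API workers.
        usage = (session.query(QuotaUsage).
                 filter_by(tenant_id=tenant_id, resource=resource).
                 with_for_update().one())
        if usage.in_use + usage.reserved + delta > limit:
            raise OverQuota(resource=resource)
        usage.reserved += delta
        session.add(Reservation(tenant_id=tenant_id,
                                resource=resource, amount=delta))

Note that on an active-active Galera cluster SELECT ... FOR UPDATE does not
lock rows across nodes, which is precisely where the write-set
certification failures mentioned above come into play.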

Please advise.

Regards,
Salvatore

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo-backlog/better-quotas.html
[2] https://review.openstack.org/#/c/190798/
[3]
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/better-quotas,n,z


Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-10 Thread Salvatore Orlando
As a further data point, Neutron has been trying to introduce
microversioning for a while, without success so far.

Given the sheer number of backends the management layer integrates with,
and the constant need for the various subteams to "experiment" with the
API, the proposal [1] probably has some differences from the proposed
guideline.

Since the proposal is neither approved nor implemented yet, perhaps it
would be worth looking at those differences and getting your advice on
whether it might be better for Neutron to adhere to the current guideline
proposal, or whether Neutron's requirements should be folded into it.
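
As a side note, from the client's perspective the header-based negotiation
discussed below boils down to something like this (sketch only; the
endpoint is illustrative and error handling is omitted):

import requests

resp = requests.get("http://ironic.example.com:6385/v1/")
server_min = resp.headers["X-Openstack-Ironic-API-Minimum-Version"]
server_max = resp.headers["X-Openstack-Ironic-API-Maximum-Version"]
# The client picks the highest version it understands within
# [server_min, server_max] and sends it on subsequent requests.

This is exactly the part where per-project divergence would hurt users the
most.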

Salvatore

[1] https://review.openstack.org/#/c/136760/

On 10 June 2015 at 06:28, Xu, Hejie  wrote:

>  I updated the Microversion specification in API-WG
> https://review.openstack.org/187112
>
>
>
> The new patchset adds min/max version headers as Ironic used:
>
> X-Openstack-[PROJECT]-API-Minimum-Version
>
> X-Openstack-[PROJECT]-API-Maximum-Version
>
>
>
> And new response body for invalid version request.
>
>
>
>   {
>     "versionFault": {
>       "max_version": "5.2",
>       "min_version": "2.1",
>       "description": "Version 5.3 is not supported by the API. \
>         Minimum is 2.1 and maximum is 5.2."
>     }
>   }
>
>
>
> Which for backward compatible can add the existed fields in the response
> also. For example, the nova response is
>
>
>
>   {
>     "versionFault": {
>       "max_version": "5.2",
>       "min_version": "2.1",
>       "description": "Version 5.3 is not supported by the API. \
>         Minimum is 2.1 and maximum is 5.2."
>     },
>     "computeFault": {
>       "message": "Version 5.3 is not supported by the API. \
>         Minimum is 2.1 and maximum is 5.2.",
>       "code": 406
>     }
>   }
>
>
>
> The “computeFault” field is included by the current implementation; we can
> still add it here, and hopefully deprecate it in the future.
>
>
>
> And the “experimental” flag in the X-OpenStack-Nova-API-Version header was
> deleted. It was mentioned in the nova-spec but never implemented, and I
> didn’t see the same thing in Ironic. For now, the current mechanism
> satisfies all the cases; if the “experimental” flag is still useful, we
> can propose it separately.
> “experimental” flag still usefull, we can propose separately.
>
>
>
> Thanks
>
> Alex
>
>
>
> *From:* Devananda van der Veen [mailto:devananda@gmail.com]
> *Sent:* Monday, June 8, 2015 1:59 AM
> *To:* OpenStack Development Mailing List
> *Subject:* Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum]
> Microversion guideline in API-WG
>
>
>
>
> On Jun 5, 2015 4:36 AM, "Sean Dague"  wrote:
> >
> > On 06/05/2015 01:28 AM, Adrian Otto wrote:
> > >
> > >> On Jun 4, 2015, at 11:03 AM, Devananda van der Veen
> > >> mailto:devananda@gmail.com>> wrote:
> > >>
> > >>
> > >> On Jun 4, 2015 12:00 AM, "Xu, Hejie"  > >> > wrote:
> > >> >
> > >> > Hi, guys,
> > >> >
> > >> > I’m working on adding Microversion into the API-WG’s guideline which
> > >> make sure we have consistent Microversion behavior in the API for
> user.
> > >> > The Nova and Ironic already have Microversion implementation, and as
> > >> I know Magnum https://review.openstack.org/#/c/184975/ is going to
> > >> implement Microversion also.
> > >> >
> > >> > Hope all the projects which support( or plan to) Microversion can
> > >> join the review of guideline.
> > >> >
> > >> > The Mircoversion specification(this almost copy from nova-specs):
> > >> https://review.openstack.org/#/c/187112
> > >> > And another guideline for when we should bump Mircoversion
> > >> https://review.openstack.org/#/c/187896/
> > >> >
> > >> > As I know, there already have a little different between Nova and
> > >> Ironic’s implementation. Ironic return min/max version when the
> requested
> > >> > version doesn’t support in server by http-headers. There isn’t such
> > >> thing in nova. But that is something for version negotiation we need
> > >> for nova also.
> > >> > Sean have pointed out we should use response body instead of http
> > >> headers, the body can includes error message. Really hope ironic team
> > >> can take a
> > >> > look at if you guys have compelling reason for using http headers.
> > >> >
> > >> > And if we think return body instead of http headers, we probably
> > >> need think about back-compatible also. Because Microversion itself
> > >> isn’t versioned.
> > >> > So I think we should keep those header for a while, does make sense?
> > >> >
> > >> > Hope we have good guideline for Microversion, because we only can
> > >> change Mircoversion itself by back-compatible way.
> > >>
> > >> Ironic returns the min/max/current API version in the http headers for
> > >> every request.
> > >>
> > >> Why would it return this information in a header on success and in the
> > >> body on failure? (How would this inconsistency benefit users?)
> > >>
> > >> To be clear, I'm not opposed to *also* having a useful error message
> > >> in th

Re: [openstack-dev] [Neutron] API Extensions - Namespace URLs

2015-06-09 Thread Salvatore Orlando
Jay is pretty much right.

In Neutron's case it is even more trivial. Somebody copied the extension
manager from Nova, and with it a sort of extension interface including this
namespace. And every Neutron developer, including me, felt compelled to
fill it up with something that resembled an XML namespace URI (which often
resolves to nowhere anyway).

I think a patch blanking out those namespaces is a great low-hanging fruit
for new contributors.
But on the other hand I'm pretty sure Kevin is wiping them out as part of
the Pecan refactor.

Salvatore

On 9 June 2015 at 20:33, Kevin Benton  wrote:

> I heard rumors that Oracle was going to introduce XML-as-a-service to
> OpenStack to make it enterprise-grade. If that's the case, we'll be ahead
> of everyone with our namespaces.
> On Jun 9, 2015 12:04 PM, "Brandon Logan" 
> wrote:
>
>> I believe XML support got removed from the API last cycle.
>> 
>> From: Jay Pipes 
>> Sent: Tuesday, June 9, 2015 1:08 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Neutron] API Extensions - Namespace URLs
>>
>> On 06/08/2015 05:10 PM, Sean M. Collins wrote:
>> > Hi,
>> >
>> > Within each API extension in the neutron tree, there is a method:
>> >
>> >  def get_namespace(cls):
>> >
>> > Which returns a string, containing a URL.
>>
>> 
>>
>> > I believe that they all 404.
>> >
>> > A dumb question to start, then progressively smarter questions:
>> >
>> > * What is the purpose of the URLs?
>>
>> They are the sad detritus left from XML support.
>>
>> > * Should the URL point to documentation?
>>
>> Perhaps.
>>
>> > * What shall we do about the actual URLs 404'ing?
>>
>> Honestly, I'd prefer the namespaces just be removed, but I'm not sure
>> what Neutron's position is about XML and the REST API...
>>
>> Best,
>> -jay
>>


Re: [openstack-dev] [neutron] - L3 scope-aware security groups

2015-06-08 Thread Salvatore Orlando
Kevin,

On 8 June 2015 at 23:52, Kevin Benton  wrote:

> There is a bug in security groups here:
> https://bugs.launchpad.net/neutron/+bug/1359523
>
> In the example scenario, it's caused by conntrack zones not being
> isolated. But it also applies to the following scenario that can't be
> solved by zones:
>
> create two networks with same 10.0.0.0/24
> create port1 in SG1 on net1 with IP 10.0.0.1
> create port2 in SG1 on net2 with IP 10.0.0.2
> create port3 in SG2 on net1 with IP 10.0.0.2
> create port4 in SG2 on net2 with IP 10.0.0.1
>

> port1 can communicate with port3 because of the allow rule for port2's IP
> port2 can communicate with port4 because of the allow rule for port1's IP
>

So this would be a scenario where bug 1359523 hits even with conntrack zone
separation, with the subtle and terrible difference that there is a way to
enable cross-network plugging? For instance, to reach port1 on net1, all I
have to do is create a network whose CIDR overlaps net1's, and then wait
until a VM is created with an IP that also exists on net1 - and then
jackpot, that VM will basically have access to all of net1's instances?

The scenario you're describing is quite concerning from a security
perspective. Shouldn't there be L2 isolation to prevent something like
this?

The solution will require the security groups processing code to understand
> that a member of a security group is not actually reachable by another
> member and skip the allow rule for that member.
>

The paragraph above is a bit obscure to me.


>
> With the current state of things, it will take a tone of kludgy code to
> check for routers and router interfaces to see if two IPs can communicate
> without NAT. However, if we end up with the concept of address-scopes, it
> just becomes a simple address scope comparison.
>

This is fine, but I wonder how that's related to what you described
earlier. Is the vulnerability triggered by the fact that both networks can
be attached to the same router? In that case I think that if the L3 mgmt
code works as expected it would reject adding an interface for a subnet
that overlaps with an already attached subnet, thus implementing an
implicit address scope of 0.0.0.0/0 (for v4).
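
In any case, the comparison Kevin is after would be something like this
(pseudo-Python; every helper here is hypothetical):

def can_reach_without_nat(port_a, port_b):
    # Today this would require a kludgy walk of routers and router
    # interfaces; with address scopes it collapses to one comparison.
    return get_address_scope(port_a) == get_address_scope(port_b)

def allowed_member_ips(member, peers):
    # Skip allow rules for SG members that are not actually reachable,
    # so a colliding IP on another network is never matched.
    return [p.ip for p in peers if can_reach_without_nat(member, p)]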


>
> Implement address scopes.
>

Sure, my master.

>
>
> Cheers!
> --
> Kevin Benton
>


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-07 Thread Salvatore Orlando
On 5 June 2015 at 01:29, Itsuro ODA  wrote:

> Hi,
>
> > After trying to reproduce this, I'm suspecting that the issue is actually
> > on the server side from failing to drain the agent report state queue in
> > time.
>
> I have seen before.
> I thought the senario at that time as follows.
> * a lot of create/update resource API issued
> * "rpc_conn_pool_size" pool exhausted for sending notify and blocked
>   farther sending side of RPC.
> * "rpc_thread_pool_size" pool exhausted by waiting "rpc_conn_pool_size"
>   pool for replying RPC.
> * receiving state_report is blocked because "rpc_thread_pool_size" pool
>   exhausted.
>
>
I think this could be a good explanation, couldn't it?
Kevin proved that the periodic tasks are not mutually exclusive and that
long process times for sync_routers are not an issue.
However, he correctly suspected a server-side involvement, which could
actually be a lot of requests saturating the RPC pool.

On the other hand, how could we use this theory to explain why this issue
tends to occur when the agent is restarted?
Also, Eugene, what do you mean by stating that the issue could be in the
agent's "fairness"?

Salvatore



> Thanks
> Itsuro Oda
>
> On Thu, 4 Jun 2015 14:20:33 -0700
> Kevin Benton  wrote:
>
> > After trying to reproduce this, I'm suspecting that the issue is actually
> > on the server side from failing to drain the agent report state queue in
> > time.
> >
> > I set the report_interval to 1 second on the agent and added a logging
> > statement and I see a report every 1 second even when sync_routers is
> > taking a really long time.
> >
> > On Thu, Jun 4, 2015 at 11:52 AM, Carl Baldwin 
> wrote:
> >
> > > Ann,
> > >
> > > Thanks for bringing this up.  It has been on the shelf for a while now.
> > >
> > > Carl
> > >
> > > On Thu, Jun 4, 2015 at 8:54 AM, Salvatore Orlando  >
> > > wrote:
> > > > One reason for not sending the heartbeat from a separate greenthread
> > > could
> > > > be that the agent is already doing it [1].
> > > > The current proposed patch addresses the issue blindly - that is to
> say
> > > > before declaring an agent dead let's wait for some more time because
> it
> > > > could be stuck doing stuff. In that case I would probably make the
> > > > multiplier (currently 2x) configurable.
> > > >
> > > > The reason for which state report does not occur is probably that
> both it
> > > > and the resync procedure are periodic tasks. If I got it right
> they're
> > > both
> > > > executed as eventlet greenthreads but one at a time. Perhaps then
> adding
> > > an
> > > > initial delay to the full sync task might ensure the first thing an
> agent
> > > > does when it comes up is sending a heartbeat to the server?
> > > >
> > > > On the other hand, while doing the initial full resync, is the  agent
> > > able
> > > > to process updates? If not perhaps it makes sense to have it down
> until
> > > it
> > > > finishes synchronisation.
> > >
> > > Yes, it can!  The agent prioritizes updates from RPC over full resync
> > > activities.
> > >
> > > I wonder if the agent should check how long it has been since its last
> > > state report each time it finishes processing an update for a router.
> > > It normally doesn't take very long (relatively) to process an update
> > > to a single router.
> > >
> > > I still would like to know why the thread to report state is being
> > > starved.  Anyone have any insight on this?  I thought that with all
> > > the system calls, the greenthreads would yield often.  There must be
> > > something I don't understand about it.
> > >
> > > Carl
> > >
> > >
> > --
> > Kevin Benton
>
> --
> Itsuro ODA 
>
>


Re: [openstack-dev] [Neutron] L3 agent rescheduling issue

2015-06-04 Thread Salvatore Orlando
One reason for not sending the heartbeat from a separate greenthread could
be that the agent is already doing it [1].
The current proposed patch addresses the issue blindly - that is to say
before declaring an agent dead let's wait for some more time because it
could be stuck doing stuff. In that case I would probably make the
multiplier (currently 2x) configurable.

The reason for which state report does not occur is probably that both it
and the resync procedure are periodic tasks. If I got it right they're both
executed as eventlet greenthreads but one at a time. Perhaps then adding an
initial delay to the full sync task might ensure the first thing an agent
does when it comes up is sending a heartbeat to the server?
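
In code, the initial-delay idea is a one-liner on the agent's periodic
tasks (sketch; names and exact import paths depend on the tree):

from oslo_service import loopingcall

# The heartbeat keeps its own looping call, as it does today [1].
heartbeat = loopingcall.FixedIntervalLoopingCall(report_state)
heartbeat.start(interval=conf.report_interval)

# Delay the heavy full resync so the very first thing a freshly
# started agent does is report state to the server.
resync = loopingcall.FixedIntervalLoopingCall(periodic_sync_routers_task)
resync.start(interval=conf.resync_interval,
             initial_delay=conf.report_interval)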

On the other hand, while doing the initial full resync, is the agent able
to process updates? If not, perhaps it makes sense to have it down until it
finishes synchronisation.

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l3/agent.py#n587

On 4 June 2015 at 16:16, Kevin Benton  wrote:

> Why don't we put the agent heartbeat into a separate greenthread on the
> agent so it continues to send updates even when it's busy processing
> changes?
> On Jun 4, 2015 2:56 AM, "Anna Kamyshnikova" 
> wrote:
>
>> Hi, neutrons!
>>
>> Some time ago I discovered a bug for l3 agent rescheduling [1]. When
>> there are a lot of resources and agent_down_time is not big enough
>> neutron-server starts marking l3 agents as dead. The same issue has been
>> discovered and fixed for DHCP-agents. I proposed a change similar to those
>> that were done for DHCP-agents. [2]
>>
>> There is no unified opinion on this bug and proposed change, so I want to
>> ask developers whether it worth to continue work on this patch or not.
>>
>> [1] - https://bugs.launchpad.net/neutron/+bug/1440761
>> [2] - https://review.openstack.org/171592
>>
>> --
>> Regards,
>> Ann Kamyshnikova
>> Mirantis, Inc
>>


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-02 Thread Salvatore Orlando
I'm not sure you can test this behaviour on your own, because it requires
the VMware plugin and the eventlet handling of backend responses.

But the issue was manifesting and had to be fixed with this mega-hack [1].
The issue was not about several workers executing the same code - the
loopingcall was always started on a single thread. The issue I witnessed
was that the other API workers just hung.

There's probably something we need to understand about how eventlet can
work safely with os.fork (I just think they're not really made to work
together!).
Regardless, I did not spend too much time on it, because I thought that the
multiple workers code might end up being rewritten anyway by the pecan
switch activities you're doing.
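
For illustration, the bluntest possible guard looks like this (a sketch
only - this is not what [1] does, and all names are made up):

import os
import eventlet

class SyncingDriver(object):
    def initialize(self):
        self._parent_pid = os.getpid()
        eventlet.spawn(self._sync_loop)

    def _sync_loop(self):
        while True:
            if os.getpid() != self._parent_pid:
                # Woken up inside a forked API worker: bail out and
                # leave the job to the parent process.
                return
            do_backend_sync()
            eventlet.sleep(60)

Even this is not airtight: greenthreads, sockets and file descriptors
copied by the fork can still interfere, which is why a proper post-fork
hook in the server would be preferable.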

Salvatore


[1] https://review.openstack.org/#/c/180145/

On 3 June 2015 at 02:20, Kevin Benton  wrote:

> Sorry about the long delay.
>
> >Even the LOG.error("KEVIN PID=%s network response: %s" % (os.getpid(),
> r.text)) line?  Surely the server would have forked before that line was
> executed - so what could prevent it from executing once in each forked
> process, and hence generating multiple logs?
>
> Yes, just once. I wasn't able to reproduce the behavior you ran into.
> Maybe eventlet has some protection for this? Can you provide small sample
> code for the logging driver that does reproduce the issue?
>
> On Wed, May 13, 2015 at 5:19 AM, Neil Jerram 
> wrote:
>
>> Hi Kevin,
>>
>> Thanks for your response...
>>
>> On 08/05/15 08:43, Kevin Benton wrote:
>>
>>> I'm not sure I understand the behavior you are seeing. When your
>>> mechanism driver gets initialized and kicks off processing, all of that
>>> should be happening in the parent PID. I don't know why your child
>>> processes start executing code that wasn't invoked. Can you provide a
>>> pointer to the code or give a sample that reproduces the issue?
>>>
>>
>> https://github.com/Metaswitch/calico/tree/master/calico/openstack
>>
>> Basically, our driver's initialize method immediately kicks off a green
>> thread to audit what is now in the Neutron DB, and to ensure that the other
>> Calico components are consistent with that.
>>
>>  I modified the linuxbridge mech driver to try to reproduce it:
>>> http://paste.openstack.org/show/216859/
>>>
>>> In the output, I never received any of the init code output I added more
>>> than once, including the function spawned using eventlet.
>>>
>>
>> Interesting.  Even the LOG.error("KEVIN PID=%s network response: %s" %
>> (os.getpid(), r.text)) line?  Surely the server would have forked before
>> that line was executed - so what could prevent it from executing once in
>> each forked process, and hence generating multiple logs?
>>
>> Thanks,
>> Neil
>>
>>  The only time I ever saw anything executed by a child process was actual
>>> API requests (e.g. the create_port method).
>>>
>>
>>
>>
>>  On Thu, May 7, 2015 at 6:08 AM, Neil Jerram >> > wrote:
>>>
>>> Is there a design for how ML2 mechanism drivers are supposed to cope
>>> with the Neutron server forking?
>>>
>>> What I'm currently seeing, with api_workers = 2, is:
>>>
>>> - my mechanism driver gets instantiated and initialized, and
>>> immediately kicks off some processing that involves communicating
>>> over the network
>>>
>>> - the Neutron server process then forks into multiple copies
>>>
>>> - multiple copies of my driver's network processing then continue,
>>> and interfere badly with each other :-)
>>>
>>> I think what I should do is:
>>>
>>> - wait until any forking has happened
>>>
>>> - then decide (somehow) which mechanism driver is going to kick off
>>> that processing, and do that.
>>>
>>> But how can a mechanism driver know when the Neutron server forking
>>> has happened?
>>>
>>> Thanks,
>>>  Neil
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Kevin Benton
>>>
>>>
>>>
>
>
>
> --
> Kevin Benton
>
> 

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Salvatore Orlando
On 3 June 2015 at 07:12, John Griffith  wrote:

>
>
> On Tue, Jun 2, 2015 at 7:19 PM, Ian Wienand  wrote:
>
>> On 06/03/2015 07:24 AM, Boris Pavlovic wrote:
>>
>>> Really it's hard to find cores that understand whole project, but
>>> it's quite simple to find people that can maintain subsystems of
>>> project.
>>>
>>
>>   We are made wise not by the recollection of our past, but by the
>>   responsibility for our future.
>>- George Bernard Shaw
>>
>> Less authorities, mini-kingdoms and
>> turing-complete-rule-based-gerrit-subtree-git-commit-enforcement; more
>> empowerment of responsible developers and building trust.
>>
>> -i
>>
>>
>>
>
> ​All of the debate about the technical feasibility, additional repos
> aside, the one question I always raise when topics like this come up is
> "how does that really solve the problem".  In other words, there's still a
> finite number of folks that dedicate the time to be "subject matter
> experts" and do the reviews.
>
> Maybe this will help, I don't know.  But I have the same argument as I
> made in my spec to remove drivers from Cinder altogether, creating "another
> repo" and moving things around just creates more overhead and does little
> to address the lack of review resources.
>

In the Neutron project we do not yet have enough data points to assess the
impact of the driver/plugin split on review turnaround. On the one hand it
seems that there is no statistically significant improvement in review
times for the "core" part, but on the other hand average review times for
plugin/driver code have improved a lot. So I reckon that there's been a
clear advantage on this front. There is always a flip side of the coin, of
course: plugin maintainers have to do extra work to chase changes in
openstack/neutron.

However, this is a bit out of scope for this thread. I'd say that splitting
a project into several repositories is an option, but not always the right
one.
because there is a stable-ish interface between the core system and the
plugin, and because there's usually little overlap of responsibilities.


> I understand you're not proposing new repos Boris, although it was
> mentioned in this thread.
>
> I do think that we could probably try and do something like growing the
> Lieutenant model that the Neutron team is hammering out.  Not sure... but
> seems like a good start; again assuming there are enough
> qualified/interested Lieutenants.  I'm not sure, but that's kind of how I
> interpreted your proposal but one additional step of ACL's; is that
> accurate?
>

While I cannot answer for Boris, my opinion is that the lieutenant system
actually tries to provide a "social" solution to the problem, whereas ACLs
are a technical solution. I personally think that the belief that there's
always a tool to fix any problem is a giant unicorn - as Robert put it,
there's no technical solution to a social problem. A technical solution
would probably end up bringing more process, more bureaucracy, and
therefore more annoyance... but I'm digressing.

In my opinion the lieutenant system is an attempt to build networks of
trusted and responsible developers who share interest (more or less vested)
and knowledge on a specific subsystem of a project. If implemented
correctly, it will ensure those networks are small enough so that trust can
be achieved in a simple way.
I'd rather rely on trust and common sense than on a set of ACLs that
probably at some point will get in the way and be more a hindrance than a
help.

Salvatore


>
> Thanks,
> John​
>
>


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Salvatore Orlando
On 2 June 2015 at 23:59, Ian Cordasco  wrote:

>
>
> On 6/2/15, 16:24, "Boris Pavlovic"  wrote:
>
> >Hi stackers,
> >
> >
> >Issue
> >---
> >
> >
> >Projects are becoming bigger and bigger overtime.
> >More and more people would like to contribute code and usually core
> >reviewers
> >team can't scale enough. It's very hard to find people that understand
> >full project and have enough time to do code reviews. As a result team is
> >very small under heavy load and many maintainers just get burned out.
> >
> >
> >We have to solve this issue to move forward.
> >
> >
> >
> >
> >Idea
> >--
> >
> >
> >Let's introduce subsystems cores.
> >
> >
> >Really it's hard to find cores that understand whole project, but it's
> >quite simple to find people that can maintain subsystems of project.
> >
> >
> >
> >
> >How To
> >---
> >
> >
> >Gerrit is not so simple as it looks and it has really neat features ;)
> >
> >
> >For example we can write own rules about who can put +2 and merge patch
> >based on changes files.
> >
> >
> >We can make special "subdirectory core" ACL group.
> >People from such ACL group will be able to merge changes that touch only
> >files from some specific subdirs.
> >
> >
> >As a result with proper organization of directories in project we can
> >scale up review process without losing quality.
> >
> >
> >
> >
> >Thoughts?
> >
> >
> >
> >
> >Best regards,
> >Boris Pavlovic
>
> I like this very much. I recall there was a session at the summit about
> this that Thierry and Kyle led.


Indeed, and Kyle has already turned that into fact [1]


> If I recall correctly, the discussion
> mentioned that it wasn't (at this point in time) possible to use gerrit
> the way you describe it, but perhaps people were mistaken?
>

I recall that too, and I also recall fungi stating the same thing back in
Paris.
Gerrit doesn't really have a concept of subsystems, as far as I can
understand; in theory gerrit could be changed to support this, but that's
another discussion.
The networking community is currently adopting multiple repositories to
this aim. This has worked very well for 3rd party plugins, and quite well
for advanced services.
For the 'neutron' project proper, which is however large enough to contain
multiple identifiable subsystems, the lieutenant model described in [1]
will be enforced with a bit of common sense - from what I gather.
for subsystem X, nominated by its lieutenant, you're not supposed to +/-2
patches that only marginally affect your subsystem or do not affect it at
all.


>
> If we can do this exactly as you describe it, that would be awesome.

> If
> there's a problem in limiting people to what files they can approve
> changes for, then an alteration might be that those people get +2 but not
> +W. This provides a signal to whomever has +W that the review is very much
> ready to be merged. Does that sound fair?
>

neutron-specs adopts this approach (all cores can +2 but only a handful can
+A).
I think it works under the assumption of a lieutenant system, but for
projects with a large patch turnaround it might constitute a bottleneck,
especially when there are gate-breaking issues that need to be approved
ASAP.
Generally speaking, I believe having 2 tiers of cores (those with +A rights
and those without) is an experiment worth doing. I don't think it creates
an "elite" among developers; on the other hand, it gives SMEs a chance to
have a greater impact.



> Cheers,
> Ian
>
>
Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/policies/core-reviewers.rst


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interconnecting projects

2015-06-02 Thread Salvatore Orlando
I suspect a "BaaS" (Bridge-as-a-service) proposal is lurking in this thread.

While the idea of yet-another-aas is probably not desirable at this time,
it might be worth trying to understand - from an exclusively logical
perspective (ie: the API consumer point of view) - what would be the
difference between having a single logical network shared across a number
of tenants, and a group of distinct networks interconnected by bridge ports.

I've tried in the past to look at "unique" use cases for a network bridge
feature; it might seem important to enforce that all the traffic between
two networks goes through a predefined channel where security and traffic
shaping policies might be applied. On the other hand, I believe the same
result can be achieved - in the logical model - with features such as
security groups. That is, unless the Neutron API consumer explicitly wants to
describe a topology where all the traffic is forced to flow through a
specific logical appliance - but then we'll descend into the NFV/SFC/etc area.

Another thing to keep in mind is that routers can be used to this aim, but
- as Anik correctly noted - this is an admin-only feature at the moment.
Allowing router owners to interconnect other tenants' networks, leveraging
concepts such as keystone groups, is something that should be a natural
evolution of the RBAC work.
Still, this will leave us with a L3 interconnection, and not a direct L2
network-network connection.
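
To make this concrete, here is how sharing a network with a single other
project might eventually look from the CLI - purely a sketch, with
hypothetical command and argument names loosely modeled on the RBAC proposal:

neutron rbac-create --type network --action access_as_shared \
    --target-tenant <target-tenant-id> <net-id>

This would grant only the target project access to the network, as opposed to
making it globally shared; extending the allowed targets (for instance to
keystone groups) is what the RBAC evolution mentioned above would enable.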

Salvatore

On 2 June 2015 at 18:58, Fawad Khaliq  wrote:

> Great!
> A correction here: RBAC proposal does address some of the use cases on
> interconnecting tenants.
>
> Fawad Khaliq
>
>
> On Tue, Jun 2, 2015 at 9:41 PM, Anik  wrote:
>
>> That's exactly what I was asking for. Thanks Fawad.
>>
>> Regards,
>> Anik
>> 201-245-1569
>>
>>   --
>>  *From:* Fawad Khaliq 
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Cc:* Anik 
>> *Sent:* Tuesday, June 2, 2015 9:29 AM
>> *Subject:* Re: [openstack-dev] Interconnecting projects
>>
>>
>> On Tue, Jun 2, 2015 at 9:14 PM, Assaf Muller  wrote:
>>
>> Check out:
>>
>> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html
>>
>> If I understand correctly, what Anik is probably asking for is a way to
>> connect two OpenStack projects together from a network point of view, where
>> a private network in Project1 can be connected to a router in Project2.
>> AFAIK, I don't think we are planning to expose such a model in RBAC, where a
>> tenant (non-admin) has a way to control who can see/connect-to his/her
>> resources.
>>
>> @Anik, please correct me if I am wrong.
>>
>>
>>
>> Kevin is trying to solve exactly this problem. We're really hoping to
>> land it in
>> time for Liberty.
>>
>> - Original Message -
>> > Hi,
>> >
>> > Trying to understand if somebody has come across the following scenario:
>> >
>> > I have a two projects: Project 1 and Project 2
>> >
>> > I have a neutron private network in Project 1, that I want to connect
>> that
>> > private network to a neutron port in Project 2.
>> >
>> > This does not seem to be possible without using admin credentials. I am
>> not
>> > talking about a shared provider network here.
>> >
>> > It seems that the problem lies in the fact that there is no data model
>> today
>> > that lets one Project have knowledge about any other Project inside the
>> same
>> > OpenStack region.
>> >
>> > Any pointers there will be helpful.
>> > Regards,
>> > Anik
>> > 201-245-1569
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - dnsmasq 'dhcp-authoritative' option broke multiple DHCP servers

2015-05-26 Thread Salvatore Orlando
From the bug Kevin reported it seems multiple DHCP agents per network have
been completely broken by the fix for bug #1345947, so a revert of patch
[1] (and its stable backports) should probably be the first thing to do - if
nothing else because the original bug has nowhere near the severity of the
one it introduced.
Before doing this however, I am wondering why the various instances of
dnsmasq end up returning NAKs. I expect all instances to have the same
hosts file, so they should be able to respond to DHCPDISCOVER/DHCPREQUEST
correctly. Is the dnsmasq log telling us exactly why the authoritative
setting is preventing us from doing so? (this is more of a curiosity in my
side)

[1] https://review.openstack.org/#/c/152080/
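
For context, a heavily abridged sketch of the dnsmasq invocation at issue -
the agent passes many more options, and the paths are illustrative:

dnsmasq --no-hosts \
    --dhcp-hostsfile=/var/lib/neutron/dhcp/<network-id>/host \
    --dhcp-optsfile=/var/lib/neutron/dhcp/<network-id>/opts \
    --dhcp-authoritative ...

According to the dnsmasq documentation, --dhcp-authoritative makes dnsmasq
answer a DHCPREQUEST for a lease it does not know about with a NAK rather
than staying silent, which is presumably where the interference between
multiple agents originates.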



On 26 May 2015 at 06:57, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 05/26/2015 04:35 AM, Kevin Benton wrote:
> > Hi,
> >
> > A recent change[1] to pass '--dhcp-authoritative' to dnsmasq has
> > caused DHCPNAK messages when multiple agents are scheduled to a
> > network [2].
> >
> > This was back-ported to Icehouse and Juno so we need a fix that is
> > compatible with both of them.
> >
> > I have two fixes for this so far and a third alternative if we
> > don't like those.
> >
> > The first is hacky, but it's only a few-line change.[3] It adds an
> > iptables rule that just stops the DHCPNAKs from making it to the
> > client. This is clean to back-port but it doesn't protect clients
> > that have filtering disabled (e.g. bare metal).
> >
> > The second persists the DHCP leases to a database.[4] The downside
> > to this was always that being rescheduled to another agent would
> > mean no entries in the lease file. This approach adds a work-around
> > to generate an initial fake lease file based on all of the ports in
> > the network.
> >
> > A third approach that I don't have a patch pushed for yet is very
> > similar to the second. When dnsmasq is in the leasefile-ro mode, it
> > will call the script passed to --dhcp-script to get a list of
> > leases to start with. This script would be built with the same
> > logic as the second one. The only difference between the second
> > approach is that dnsmasq wouldn't persist leases to a database.
> >
>
> Actually, that approach was initially taken for bug 1345947, but then
> the patch was abandoned to be replaced with a simpler
> - --dhcp-authoritative approach that ended up with unexpected NAKs for
> multi agent setup.
>
> See: https://review.openstack.org/#/c/108272/12
>
> Maybe we actually want to restore the work and merge it after
> conflicts are resolved and --dhcp-authoritative option is killed; the
> patch was almost merged when --dhcp-authoritative suggestion emerged,
> so most of nitpicking work should be complete now (though at the same
> time, I totally trust our community to find another pile of nits to
> work on for the next few weeks!)
>

That was my thought as well.
However, we should check whether that patch is ok to backport. For instance,
I see that it appears to be adding a script:

[2]
https://review.openstack.org/#/c/108272/12/bin/neutron-dhcp-agent-dnsmasq-lease-init


>
> ===
>
> Speaking of regression testing... Are full stack tests already
> powerful enough for us to invoke multiple DHCP agents and test the
> scenario?
>
> Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJVZHvHAAoJEC5aWaUY1u57vukIAJLPpQ9O236NYtOaRTzkL7g8
> Io1DmF6jyhJYFqfzoFcrFVbNmM0EsNtvMgZIhI8oYINkkoBYMJPoS2a8FvVUpZHw
> u/fmdvdbZgJwy4BCAEF0t+R1t1XLo6eTcPp8f3jABzExWyrLoKEbHJ0aWb5xwJ3u
> V74HXxo/PVifrNfxsQPn57ZxqgBvl4GSQAFQKE4FX/H81HWRWRuB5a9aC+hkYC9w
> 7FqXpf+IFCaS7tYdTSqJUa2/bKs268RQGoVqAYEtmVV5pA3OiMsy459rdLcHqqxS
> 67lryFh1DTMwI77LjDEanXzWIdMhb3t0YZw7ewpBBLl6P/Lh7xobIOGX2GeOyJ0=
> =xivW
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Stepping down from Neutron core team

2015-05-21 Thread Salvatore Orlando
After putting the whole OpenStack networking contributors community through
almost 8 cycles of pedantic comments and annoying "what if" questions, it is
probably time for me to relieve neutron contributors from this burden.

It has been a pleasure for me serving the Neutron community (or Quantum as
it was called at the time), and now it feels right - and probably overdue -
to relinquish my position as a core team member in a spirit of rotation and
alternation between contributors.

Note: Before you uncork your champagne bottles, please be aware that I will
stay in the Neutron community as a contributor and I might still end up
reviewing patches.

Thanks for being so understanding with my pedantic remarks,
Salvatore
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][cognitive] Announcing Cognitive - a project to deliver Machine Learning as a Service for OpenStack

2015-05-14 Thread Salvatore Orlando
On 15 May 2015 at 00:19, Debojyoti Dutta  wrote:

> Hi!
>
> It is a great pleasure to announce the development of a new project called
> Cognitive.  Cognitive provides Machine Learning [1] as a Service that
> enables operators to offer next generation data science based services on
> top of their OpenStack Clouds.
>

I was indeed wondering when "Machine Learning as a Service" would come up...


> This project will begin as a StackForge project based upon an empty
> cookiecutter [2] repo.  The repos to work in are:
> Server: https://github.com/stackforge/cognitive
> Client: https://github.com/stackforge/python-cognitiveclient
>
> Please join us via iRC on #openstack-cognitive on freenode.
>
> We will be holding a doodle poll to select times for our first meeting the
> week after summit.  This doodle poll will close May 24th and meeting times
> will be announced on the mailing list at that time.  At our first IRC
> meeting, we will draft additional core team members. We would like to
> invite interested individuals to join this exciting new development effort!
>

From my little experience, "drafting" core members before even actually
having a code base has drawbacks. Also, it seems the initial starting team
is already large enough to ensure support for 1 or 2 release cycles.


>
>

> Please commit your schedule in the doodle poll here:
> http://doodle.com/drrka5tgbwpbfbxy
>
> Initial core team: Steven Dake, Aparupa Das Gupa, Debo~ Dutta, Johnu
> George,  Kyle Mestery, Sarvesh Ranjan, Ralf Rantzau, Komei Shimamura, Marc
> Solanas, Manoj Sharma, Yathi Udupi, Kai Zhang.
>

Hey! What's the Neutron PTL doing there? Sorry, we need his reviews - we can't
loan him to you!


>
> A little bit about Cognitive:
> Data driven applications on cloud infrastructure increasingly rely on
> Machine Learning. Most data driven applications today use Machine Learning
> (ML). This often requires application developers and data scientists to
> write their own machine learning stack or deploy other packages to do any
> kind of data science based applications. Data scientists also need to have
> an easy way to rapidly experiment with data without having to write basic
> infrastructure for data manipulations. Cognitive is a Machine Learning
> service on top of OpenStack and provides machine learning based services to
> tenants (API, workbench, compute service).
>

I wonder what kind of services you would offer; also you could have shared
something about the architecture of this service. Is it providing a full
machine learning stack, or just facilitating the use of an existing one?

But I see that there's a link to a wiki page below. This might have all the
answers.


>
>
> For information about blueprints check out:
> https://blueprints.launchpad.net/cognitive
> https://blueprints.launchpad.net/python-cognitiveclient
>
> For more details, check out our Wiki:
> https://wiki.openstack.org/wiki/Cognitive
>

... and unfortunately the wiki is empty ;)


>
> Please join the awesome Cognitive team in designing a world class Machine
> Learning as a Service solution.
>
> We look forward to seeing you on IRC on #openstack-cognitive.
>
> Regards,
> Debo~ Dutta (on behalf of the initial team)
>
> [1] http://en.wikipedia.org/wiki/Machine_learning
> [2] https://github.com/openstack-dev/cookiecutter
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Neutron QoS (Quality Of Service) update

2015-05-08 Thread Salvatore Orlando
On 7 May 2015 at 10:32, Miguel Ángel Ajo  wrote:

> Gal, thank you very much for the update to the list, I believe it’s very
> helpful,
> I’ll add some inline notes.
>
> On Thursday, 7 de May de 2015 at 8:51, Gal Sagie wrote:
>
> Hello All,
>
> I think that the Neutron QoS effort is progressing to a critical point and
> I asked Miguel if I could post an update on our progress.
>
> First, I would like to thank Sean and Miguel for running this effort and
> everyone else that is involved; I personally think it's on the right track.
> However, I would like to see more people involved, especially more
> experienced Neutron members, because I believe we want to make the right
> decisions and learn from past mistakes when making the API design.
>
> Feel free to update in the meeting wiki [1], and the spec review [2]
>
> *Topics*
>
> *API microversioning spec implications [3]*
> QoS can benefit from this design, however some concerns were raised that
> this will
> only be available at mid L cycle.
> I think a better view is needed of how this aligns with the new QoS design,
> and any feedback/recommendation is useful.
>
> I guess a strategy here could be: go on with an extension, and translate
> that into
> an experimental API once micro versioning is ready, then after one cycle
> we could
> “graduate it” to get versioned.
>

Indeed. I think the guy who wrote the spec mentioned how to handle
extensions which are in the pipeline already, and has a kind word for QoS
in particular.

>
> *Changes to the QoS API spec: scoping into bandwidth limiting*
> At this point the concentration is on the API and implementation
> of bandwidth limiting.
>
> However it is important to keep the design easily extensible for some next
> steps
> like traffic classification and marking
> *.*
>
This is important for architecting your data model, RPC interfaces, and to
some extent even the control plane.
From a user perspective (and hence API design) the question to ask would be
whether a generic QoS API (for instance based on generic QoS policies which
might have a different nature) is better than explicit ones - where you
would have distinct URIs for rate limiting, traffic shaping, marking, etc.

I am not sure of what could be the right answer here. I tend to think
distinct URIs are more immediate to use. On the other hand users will have
to learn more APIs, but even with a generic framework users will have to
learn how to create policies for different types of QoS policies.
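
Purely as an illustration - the resource names below are hypothetical, not an
agreed API - the two styles might look like:

# generic, policy-based style: a single resource with typed rules inside
POST /v2.0/qos-policies
     {"qos_policy": {"rules": [{"type": "bandwidth_limit",
                                "max_kbps": 1024}]}}

# explicit style: one resource (and URI) per rule type
POST /v2.0/bandwidth-limit-rules
     {"bandwidth_limit_rule": {"max_kbps": 1024}}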

>
> *Changes to the QoS API spec: modeling of rules (class hierarchy)
> (Guarantee split out)*
> There is a QoSPolicy which is composed of different QoSRules, there is
> a discussion of splitting the rules into different types like
> QoSRule.
> (This in order to support easy extension of this model by adding new type
> of rules which extend the possible parameters)
>
> Plugins can then separate optional aspects into separate rules.
> Any feedback on this approach is appreciated.
>
> *Discuss multiple API end points (per rule type) vs single*
>
>
> here, the topic name was incorrect, where I said API end points, we were
> meaning URLs or REST resources.. (thanks Irena for the correction)
>

So probably my previous comment applies here as well.

>
>
> In summary this means  that in the above model, do we want to support
> /v1/qosrule/..  or   /v1/qosrule/ API's
> I think the consensus right now is that the later is more flexible.
>
> Miguel is also checking the possibility of using something like:
> /v1/qosrule/type/... kind of parsing
> Feedback is desired here too :)
>
> *Traffic Classification considerations*
> The idea right now is to extract the TC classification to another data
> model
> and attach it to rule
> that way no need to repeat same filters for the same kind of traffic.
>
>
Didn't you say you were going to focus on rate limiting? ;)

>
> Of course we need to consider here what it means to "update" a classifier
> and not to introduce too many dependencies
>
> About this, the intention is not to fully model this, or to include it in
> the data model now,
> but to try to see how we could do it in future iterations and see if it fits
> the current data model
> and APIs we’re proposing.
>

Can classifier management be considered an admin mgmt feature like instance
flavour?

>
>
>
> *The ingress vs egress differences and issues*
> Egress bandwidth limiting is much more useful and supported,
> There is still doubt on the support of Ingress bandwidth limiting in OVS,
> anyone
> that knows if Ingress QoS is supported in OVS we want your feedback :)
>
I do not think so, but don't take my word for it.
You can ping somebody in #openvswitch or post to ovs-disc...@openvswitch.org


> (For example implementing OF1.3 Metering spec)
>
> Thanks all (Miguel, Sean or anyone else, please update this if i forgot
> anything)
>
> [1] https://wiki.openstack.org/wiki/Meetings/QoS
> [2] https://review.openstack.org/#/c/88599/
> [3] https://review.openstack.org/#/c/136760/

Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-05-08 Thread Salvatore Orlando
Just like the Neutron plugin manager, the ML2 driver manager ensures
drivers are loaded only once regardless of the number of workers.
What Kevin did proves that drivers are correctly loaded before forking (I
reckon).

However, forking is something to be careful about especially when using
eventlet. For the plugin my team maintains we were creating a periodic task
during plugin initialisation.
This led to an interesting condition where API workers were hanging [1].
The situation was addressed with a rather pedestrian fix - adding a delay.

Generally speaking I would find it useful to have a way to "identify" an API
worker in order to designate a specific one for processing that should not
be made redundant.
On the other hand I self-object to the above statement by saying that API
workers are not supposed to do this kind of processing, which should be
deferred to some other helper process.
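
For what it's worth, a crude sketch of the kind of guard a driver could use
today - the class and method names are hypothetical, and this only detects
running in a forked child, it does not elect a designated worker:

import os

class MyMechanismDriver(object):

    def initialize(self):
        # runs once in the parent process, before the API workers fork
        self._parent_pid = os.getpid()

    def _maybe_start_background_work(self):
        if os.getpid() != self._parent_pid:
            # we are in a forked API worker: skip, so the background
            # processing is not duplicated in every child
            return
        # ... kick off the periodic/network processing here ...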

Salvatore

[1] https://bugs.launchpad.net/vmware-nsx/+bug/1420278

On 8 May 2015 at 09:43, Kevin Benton  wrote:

> I'm not sure I understand the behavior you are seeing. When your mechanism
> driver gets initialized and kicks off processing, all of that should be
> happening in the parent PID. I don't know why your child processes start
> executing code that wasn't invoked. Can you provide a pointer to the code
> or give a sample that reproduces the issue?
>
> I modified the linuxbridge mech driver to try to reproduce it:
> http://paste.openstack.org/show/216859/
>
> In the output, I never received any of the init code output I added more
> than once, including the function spawned using eventlet.
>
> The only time I ever saw anything executed by a child process was actual
> API requests (e.g. the create_port method).
>
>
> On Thu, May 7, 2015 at 6:08 AM, Neil Jerram 
> wrote:
>
>> Is there a design for how ML2 mechanism drivers are supposed to cope with
>> the Neutron server forking?
>>
>> What I'm currently seeing, with api_workers = 2, is:
>>
>> - my mechanism driver gets instantiated and initialized, and immediately
>> kicks off some processing that involves communicating over the network
>>
>> - the Neutron server process then forks into multiple copies
>>
>> - multiple copies of my driver's network processing then continue, and
>> interfere badly with each other :-)
>>
>> I think what I should do is:
>>
>> - wait until any forking has happened
>>
>> - then decide (somehow) which mechanism driver is going to kick off that
>> processing, and do that.
>>
>> But how can a mechanism driver know when the Neutron server forking has
>> happened?
>>
>> Thanks,
>> Neil
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] Extensions out, Micro-versions in

2015-05-06 Thread Salvatore Orlando
Thanks Bob.

Two answers/comments below.

On 6 May 2015 at 14:59, Bob Melander (bmelande)  wrote:

>  Hi Salvatore,
>
>  Two questions/remarks below.
>
>   From: Salvatore Orlando 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: onsdag 6 maj 2015 00:13
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [neutron][api] Extensions out, Micro-versions in
>
>   #5 Plugin/Vendor specific APIs
>
>  Neutron is without doubt the project with the highest number of 3rd
> party (OSS and commercial) integration. After all it was mostly vendors who
> started this project.
> Vendors [4] use the extension mechanism to expose features in their
> products not covered by the Neutron API or to provide some sort of
> value-added service.
> The current proposal still allows 3rd parties to attach extensions to the
> neutron API, provided that:
> - they're not considered part of the Neutron API, in terms of versioning,
> documentation, and client support
>
>  BOB> There are today vendor specific commands in the Neutron CLI client.
> Such commands are prepended with the name of the vendor, like
> cisco_ and nec_.
> I think that makes it quite visible to the user that the command is
> specific to a vendor feature and not part of neutron core. Would it be
> possible to allow for that also going forward? I would think that from a
> user perspective it can be convenient to be able to access vendor add-on
> features using a single CLI client.
>

In a nutshell no, but maybe.
Vendor extensions are not part of the Neutron API, but if the community
decides to support them in the official client anyway, you will still be
able to run vendor-specific CLI commands. Otherwise vendors will have to
provide their own client tools, which is feasible as well.
Personally, I would be against having vendor-specific CLI commands in
python-neutronclient. To me it will be tantamount to saying: yes please do
versioning, but don't take extensions away from us.
However the developer, user, and operator community might have a different
opinion, and as usual the decision will derive from community consensus.


>
>- they do not redefine resources defined by the Neutron API.
>
>  BOB> Does “redefine" here include extending a resource with additional
> attributes?
>

In my opinion yes. But I do not have a very strong point here. Also,
enforcing this will require many vendors to do backward incompatible
changes in the API, and therefore we would need a deprecation cycle. So
let's say that ideally modifying the shape of a neutron resource by adding
attributes might be considered a "discouraged, but not forbidden"
practice. For instance, if you want to attach a QoS profile to a port, rather
than adding a 'vendor_qos_profile' to the port resource you might add a
vendor_port_info resource with a reference to the vendor_qos_profile_id and
the neutron port_id.
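
In hypothetical API terms (attribute and resource names are purely
illustrative, taken from the example above):

# discouraged - extending the core resource
PUT /v2.0/ports/<port_id>
    {"port": {"vendor_qos_profile": "gold"}}

# preferred - a separate vendor resource referencing the port
POST /v2.0/vendor_port_infos
     {"vendor_port_info": {"port_id": "<port_id>",
                           "vendor_qos_profile_id": "gold"}}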


>- they do not live in the neutron source tree
> The aim of the provisions above is to minimize the impact of such
> extensions on API portability.
>
>  Thanks for reading and thanks in advance for your feedback,
>  Salvatore
>
>  The title of this post has been inspired by [2]  (the message in the
> banner may be unintelligible to readers not fluent in european football)
>
>  [1] https://review.openstack.org/#/c/136760/
> [2]
> http://a.espncdn.com/combiner/i/?img=/photo/2015/0502/fc-banner-jd-1296x729.jpg&w=738&site=espnfc
> [3]
> http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
> [4] By "vendor" here we refer either to a cloud provider or a company
> providing Neutron integration for their products.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] Extensions out, Micro-versions in

2015-05-05 Thread Salvatore Orlando
Thanks Kevin,

answers inline.

On 6 May 2015 at 00:28, Fox, Kevin M  wrote:

>  so... as an operator looking at #3, if I need to support lbaas, I'm
> getting pushed to run more and more services, like octavia, plus a
> neutron-lbaas service, plus neutron? This seems like an operator
> scalability issue... What benifit does splitting out the advanced services
> into their own services have?
>

You have a valid point here. In the past I was keen on insisting that
neutron was supposed to be a management-layer-only service for most
networking services. However, the consensus seems to be moving toward a
However, the consensus seems to move toward a microservices-style
architecture. It would be interesting to get some feedback regarding the
additional operational burden of managing a plethora of services, even if
it is worth noting that when one deploys neutron with its reference
architecture there are already plenty of moving parts.

Regardless, I need to slap your hand because this discussion is not really
pertinent to this thread, which is specifically about having a strategy for
the Neutron API.
I would be happy to have a separate thread for defining a strategy for
neutron services. I'm pretty sure Doug will be more than happy to slap your
hands too.


> Thanks,
> Kevin
>  ------
> *From:* Salvatore Orlando [sorla...@nicira.com]
> *Sent:* Tuesday, May 05, 2015 3:13 PM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [neutron][api] Extensions out, Micro-versions
> in
>
>   There have now been a few iterations on the specification for Neutron
> micro-versioning [1].
> It seems that no-one in the community opposes introducing versioning. In
> particular API micro-versioning as implemented by Nova and Ironic seems a
> decent way to evolve the API incrementally.
>
>  What the developer community seems not yet convinced about is moving
> away from extensions. It seems everybody realises the flaws of evolving the
> API through extensions, but there are understandable concerns regarding
> impact on plugins/drivers as well as the ability to differentiate, which is
> something quite dear to several neutron teams. I tried to consider all
> those concerns and feedback received; hopefully everything has been
> captured in a satisfactory way in the latest revision of [1].
> With this ML post I also seek feedback from the API-wg concerning the
> current proposal, whose salient points can be summarised as follows:
>
>  #1 Extensions are no longer part of the Neutron API.
>
>  Evolution of the API will now be handled through versioning. Once
> microversions are introduced:
>- current extensions will be progressively moved into the Neutron
> "unified" API
>- no more extensions will be accepted as part of the Neutron API
>
>  #2 Introduction of "features" for addressing diversity in Neutron plugins
>
>  It is possible that the combination of neutron plugins chosen by the
> operator won't be able to support the whole Neutron API. For this reason a
> concept of "feature" is included. What features are provided depends on the
> plugins loaded. The list of features is hardcoded as strictly dependent on
> the Neutron API version implemented by the server. The specification also
> mandates a minimum set of features every neutron deployment must provide
> (those would be the minimum set of features needed for integrating Neutron
> with Nova).
>
>  #3 Advanced services are still extensions
>
>  This is a temporary measure, as APIs for load balancing, VPN, and Edge
> Firewall are still served through neutron WSGI. As in the future this API
> will live independently it does not make sense to version them with Neutron
> APIs.
>
>  #4 Experimenting in the API
>
>  One thing that has plagued Neutron in the past is the impossibility of
> getting people to reach any sort of agreement over the shape of certain
> APIs. With the proposed plan we encourage developers to submit experimental
> APIs. Experimental APIs are unversioned and no guarantee is made regarding
> deprecation or backward compatibility. Also they're optional, as a deployer
> can turn them off. While there are caveats, like forever-experimental APIs,
> this will enable developer to address user feedback during the APIs'
> experimental phase. The Neutron community and the API-wg can provide plenty
> of useful feedback, but ultimately it is user feedback which determines whether
> an API proved successful or not. Please note that the current proposal goes
> in a direction different from that approved in Nova when it comes to
> experimental APIs [3]
>
>  #5 Plugin/Vendor specific APIs
>
>  Neutron is without doubt the project with the highest number of 3rd
> party (OSS and commercial) integration.

Re: [openstack-dev] [neutron] How should edge services APIs integrate into Neutron?

2015-05-05 Thread Salvatore Orlando
I think Paul is correctly scoping this discussion in terms of APIs and
management layer.
For instance, it is true that dynamic routing support, and BGP support
might be a prerequisite for BGP VPNs, but it should be possible to have at
least an idea of how user and admin APIs for this VPN use case should look
like.

In particular the discussion on service chaining is a bit out of scope
here. I'd just note that [1] seems to have a lot of overlap with
group-based-policies [2], and that it appears to be a service that consumes
Neutron rather than an extension to it.

The current VPN service was conceived to be fairly generic. IPSEC VPN is
the only implemented one, but SSL VPN and BGP VPN were on the map as far as
I recall.
Personally having a lot of different VPN APIs is not ideal for users. As a
user, I probably don't even care about configuring a VPN. What is important
for me is to get L2 or L3 access to a network in the cloud; therefore I
would seek common abstractions that might allow a user to configure
a VPN service using the same APIs. Obviously there will then be parameters
which will be specific for the particular class of VPN being created.

I listened to several contributors in the area in the past, and there are
plenty of opinions across a spectrum which goes from total abstraction
(just expose "edges" at the API layer) to what could be tantamount to a
RESTful configuration of a VPN appliance. I am not in a position such to
prescribe what direction the community should take; so, for instance, if
the people working on XXX VPN believe the best way forward for them is to
start a new project, so be it.

The other approach would obviously be to build onto the current APIs. The only
way the Neutron API layer provides to do that is to extend an extension.
This sounds terrible, and it is indeed terrible. There is a proposal for
moving toward versioned APIs [3], but until that proposal is approved and
implemented extensions are the only thing we have.
From an API perspective the mechanism would be simpler:
1 - declare the extension, and implement get_required_extension to put
'vpnaas' as a requirement
2 - implement a DB mixin for it providing basic CRUD operations
3 - add it to the VPN service plugin and add its alias to
'supported_extensions_aliases' (step 2 and 3 can be merged if you wish not
to have a mixin)

What might be a bit more challenging is defining how this reflects onto
VPN. Ideally you would have a driver for every VPN type you support, and
then have a little dispatcher to route the API call to the appropriate
driver according to the VPN type.
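
As a rough sketch - the class names, the 'xxxvpn' alias, the XxxvpnDbMixin,
and the dispatcher helper below are all hypothetical; only the
extension-framework hooks are real:

from neutron.api import extensions

# 1. declare the extension and make it require vpnaas (the other
#    mandatory descriptor methods are omitted for brevity)
class Xxxvpn(extensions.ExtensionDescriptor):

    @classmethod
    def get_alias(cls):
        return "xxxvpn"

    def get_required_extensions(self):
        return ["vpnaas"]

# 2 + 3. CRUD mixin merged into the VPN service plugin, which
# advertises the alias and dispatches to a per-VPN-type driver
class VPNDriverPlugin(XxxvpnDbMixin):

    supported_extension_aliases = ["vpnaas", "xxxvpn"]

    def create_xxxvpn_connection(self, context, xxxvpn_connection):
        conn = xxxvpn_connection['xxxvpn_connection']
        driver = self._get_driver_for_vpn_type(conn['vpn_type'])
        return driver.create_connection(context, conn)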

Salvatore

[1]
https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining
[2] https://wiki.openstack.org/wiki/GroupBasedPolicy
[3] https://review.openstack.org/#/c/136760

On 6 May 2015 at 07:14, Vikram Choudhary 
wrote:

>  Hi Paul,
>
>
>
> Thanks for starting this mail thread.  We are also eyeing for supporting
> MPBGP in neutron and will like to actively participate in this discussion.
>
> Please let me know about the IRC channels which we will be following for
> this discussion.
>
>
>
> Currently, I am following below BP’s for this work.
>
> https://blueprints.launchpad.net/neutron/+spec/edge-vpn
>
> https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
>
> https://blueprints.launchpad.net/neutron/+spec/dynamic-routing-framework
>
>
> https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol
>
>
>
> Moreover, a similar kind of work is being headed by Cathy for defining an
> intent framework which can extended for various use case. Currently it will
> be leveraged for SFC but I feel the same can be used for providing intend
> VPN use case.
>
>
> https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining
>
>
>
> Thanks
>
> Vikram
>
>
>
> *From:* Paul Michali [mailto:p...@michali.net]
> *Sent:* 06 May 2015 01:38
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [neutron] How should edge services APIs
> integrate into Neutron?
>
>
>
> There's been talk in VPN land about new services, like BGP VPN and DM VPN.
> I suspect there are similar things in other Advanced Services. I talked to
> Salvatore today, and he suggested starting a ML thread on this...
>
>
>
> Can someone elaborate on how we should integrate these API extensions into
> Neutron, both today, and in the future, assuming the proposal that
> Salvatore has is adopted?
>
>
>
> I could see two cases. The first, and simplest, is when a feature has an
> entirely new API that doesn't leverage off of an existing API.
>
>
>
> The other case would be when the feature's API would dovetail into the
> existing service API. For example, one may use the existing vpn_service API
> to create the service, but then create BGP VPN or DM VPN connections for
> that service, instead of the IPSec connections we have today.
>
>
>
> If there are examples already of how to extend ...

[openstack-dev] [neutron][api] Extensions out, Micro-versions in

2015-05-05 Thread Salvatore Orlando
There have now been a few iterations on the specification for Neutron
micro-versioning [1].
It seems that no-one in the community opposes introducing versioning. In
particular API micro-versioning as implemented by Nova and Ironic seems a
decent way to evolve the API incrementally.

What the developer community seems not yet convinced about is moving away
from extensions. It seems everybody realises the flaws of evolving the API
through extensions, but there are understandable concerns regarding impact
on plugins/drivers as well as the ability to differentiate, which is
something quite dear to several neutron teams. I tried to consider all
those concerns and feedback received; hopefully everything has been
captured in a satisfactory way in the latest revision of [1].
With this ML post I also seek feedback from the API-wg concerning the
current proposal, whose salient points can be summarised as follows:

#1 Extensions are no longer part of the Neutron API.

Evolution of the API will now be handled through versioning. Once
microversions are introduced:
   - current extensions will be progressively moved into the Neutron
"unified" API
   - no more extensions will be accepted as part of the Neutron API
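
Purely as an illustration of the mechanism - the header name and version
values are hypothetical, modeled on Nova's microversioning [3]:

GET /v2.0/ports HTTP/1.1
X-OpenStack-Neutron-API-Version: 2.3

A server which does not implement the requested version would reply with an
error advertising the minimum and maximum versions it supports.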

#2 Introduction of "features" for addressing diversity in Neutron plugins

It is possible that the combination of neutron plugins chosen by the
operator won't be able to support the whole Neutron API. For this reason a
concept of "feature" is included. What features are provided depends on the
plugins loaded. The list of features is hardcoded as strictly dependent on
the Neutron API version implemented by the server. The specification also
mandates a minimum set of features every neutron deployment must provide
(those would be the minimum set of features needed for integrating Neutron
with Nova).

#3 Advanced services are still extensions

This is a temporary measure, as APIs for load balancing, VPN, and Edge
Firewall are still served through neutron WSGI. As in the future this API
will live independently it does not make sense to version them with Neutron
APIs.

#4 Experimenting in the API

One thing that has plagued Neutron in the past is the impossibility of
getting people to reach any sort of agreement over the shape of certain
APIs. With the proposed plan we encourage developers to submit experimental
APIs. Experimental APIs are unversioned and no guarantee is made regarding
deprecation or backward compatibility. Also they're optional, as a deployer
can turn them off. While there are caveats, like forever-experimental APIs,
this will enable developer to address user feedback during the APIs'
experimental phase. The Neutron community and the API-wg can provide plenty
of useful feedback, but ultimately it is user feedback which determines whether
an API proved successful or not. Please note that the current proposal goes
in a direction different from that approved in Nova when it comes to
experimental APIs [3]

#5 Plugin/Vendor specific APIs

Neutron is without doubt the project with the highest number of 3rd party
(OSS and commercial) integration. After all it was mostly vendors who
started this project.
Vendors [4] use the extension mechanism to expose features in their
products not covered by the Neutron API or to provide some sort of
value-added service.
The current proposal still allows 3rd parties to attach extensions to the
neutron API, provided that:
- they're not considered part of the Neutron API, in terms of versioning,
documentation, and client support
- they do not redefine resources defined by the Neutron API.
- they do not live in the neutron source tree
The aim of the provisions above is to minimize the impact of such
extensions on API portability.

Thanks for reading and thanks in advance for your feedback,
Salvatore

The title of this post has been inspired by [2]  (the message in the banner
may be unintelligible to readers not fluent in european football)

[1] https://review.openstack.org/#/c/136760/
[2]
http://a.espncdn.com/combiner/i/?img=/photo/2015/0502/fc-banner-jd-1296x729.jpg&w=738&site=espnfc
[3]
http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[4] By "vendor" here we refer either to a cloud provider or a company
providing Neutron integration for their products.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?

2015-05-05 Thread Salvatore Orlando
Patch #153236 is introducing pluggable IPAM in the db base plugin class,
and defaults to it at the same time, I believe.

If the consensus is to default to the pluggable IPAM driver, then in order to
satisfy grenade requirements those migration scripts should be run. There
should actually be a single script to be run in a one-off fashion. Even
better, it could be treated as a DB migration.

However, the plan for Kilo was to not turn on pluggable IPAM for default.
Now that we are targeting Liberty, we should have this discussion again,
and not take for granted that we should default to pluggable IPAM just
because a few months ago we assumed it would be default by Liberty.
I suggest to not enable it by default, and then consider in L-3 whether we
should do this switch.
For the time being, would it be possible to amend patch #153236 to not run
pluggable IPAM by default? I appreciate this would have some impact on unit
tests as well, which should be run both for pluggable and "traditional"
IPAM.

Salvatore

On 4 May 2015 at 20:11, Pavel Bondar  wrote:

> Hi,
>
> During fixing failures in db_base_plugin_v2.py with new IPAM[1] I faced
> to check-grenade-dsvm-neutron failures[2].
> check-grenade-dsvm-neutron installs stable/kilo, creates
> networks/subnets, and upgrades to the patched master.
> So it validates that the migrations pass fine and the installation works
> fine afterwards.
>
> This is where failure occurs.
> Earlier there was an agreement about using pluggable IPAM only for
> greenfield installations, so the migration script from built-in IPAM to
> pluggable IPAM was postponed.
> And check-grenade-dsvm-neutron validates the non-greenfield (upgrade)
> scenario.
> So do we want to update this agreement and implement migration scripts
> from built-in IPAM to pluggable IPAM now?
>
> Details about the failures:
> Subnets created before the patch was applied do not have a corresponding
> IPAM subnet,
> so we observed a lot of failures like this in [2]:
> >Subnet 2c702e2a-f8c2-4ea9-a25d-924e32ef5503 could not be found
> Currently the config option in the patch is modified to use pluggable_ipam
> by default (to catch all possible UT/tempest failures).
> But before the merge the patch will be switched back to the non-IPAM
> implementation by default.
>
> I would prefer to implement the migration script as a separate review,
> since [1] is already quite big and hard to review.
>
> [1] https://review.openstack.org/#/c/153236
> [2]
>
> http://logs.openstack.org/36/153236/54/check/check-grenade-dsvm-neutron/42ab4ac/logs/grenade.sh.txt.gz
>
> - Pavel Bondar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] _get_subnet() in OpenContrail tests results in port deletion

2015-05-04 Thread Salvatore Orlando
I think the first workaround is the solution we're looking for as it better
reflects the fact that opencontrail is a db-less plugin.
I hope it will be the easier one too, but you can never be too sure with
neutron unit tests.
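
In code, the first workaround could look roughly like the sketch below; the
module path and the direct_get_subnet helper are assumptions, only the
NeutronDbSubnet.fetch_subnet name comes from this thread:

import mock

from neutron.ipam.drivers.neutrondb_ipam import driver as ipam_driver

def patch_fetch_subnet(test, direct_get_subnet):
    # make the IPAM driver read the subnet directly, bypassing the
    # serialized-context round trip through the FakeServer
    patcher = mock.patch.object(
        ipam_driver.NeutronDbSubnet, 'fetch_subnet',
        side_effect=direct_get_subnet)
    patcher.start()
    test.addCleanup(patcher.stop)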

Salvatore

On 4 May 2015 at 12:56, Pavel Bondar  wrote:

> Hi Kevin,
>
> Thanks for your answer, that is what I was looking for!
> I'll check with you in irc to decide which workaround is better:
> 1. Mocking NeutronDbSubnet fetch_subnet for opencontrail tests.
> 2. Using session.query() directly in NeutronDbSubnet fetch_subnet.
>
> - Pavel Bondar
>
> On 30.04.2015 22:46, Kevin Benton wrote:
> > The OpenContrail plugin itself doesn't even use the Neutron DB. I
> > believe what you are observing is a side effect of the fake server they
> > have for their tests, which does inherit the neutron DB.
> >
> > When you call a method on the core plugin in the contrail unit test
> > case, it will go through their request logic and will be piped into the
> > fake server. During this time, the db session that was associated with
> > the original context passed to the core plugin will be lost do to its
> > conversion to a dict.[1, 2]
> >
> > So I believe what you're seeing is this.
> >
> > 1. The FakeServer gets create_port called and starts its transactions.
> > 2. It now hits the ipam driver which calls out to the neutron manager to
> > get the core plugin handle, which is actually the contrail plugin and
> > not the FakeServer.
> > 3. IPAM calls _get_subnet on the contrail plugin, which serializes the
> > context[1] and sends it to the FakeServer.
> > 4. The FakeServer code receives the request and deserializes the
> > context[2], which no longer has the db session.
> > 5. The FakeServer then ends up starting a new session to read the
> > subnet, which will interfere with the transaction you created the port
> > under since they are from the same engine.
> >
> > This is why you can query the DB directly rather than calling the core
> > plugin. The good news is that you don't have to worry because the actual
> > contrail plugin won't be using any of this logic so you're not actually
> > breaking anything.
> >
> > I think what you'll want to do is add a mock.patch for the
> > NeutronDbSubnet fetch_subnet method to monkey patch in a reference to
> > their FakeServer's _get_subnet method. Ping me on IRC (kevinbenton) if
> > you need help.
> >
> > 1.
> >
> https://github.com/openstack/neutron/blob/master/neutron/plugins/opencontrail/contrail_plugin.py#L111
> > 2.
> >
> https://github.com/openstack/neutron/blob/master/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py#L121
> >
> > On Thu, Apr 30, 2015 at 6:37 AM, Pavel Bondar wrote:
> >
> > Hi,
> >
> > I am debugging issue observed in OpenContrail tests[1] and so far it
> > does not look obvious.
> >
> > Issue:
> >
> > In create_port [2] a new transaction is started.
> > The port gets created, but disappears right after reading the subnet from
> > the plugin
> > in the reference IPAM driver [3]:
> >
> > >plugin = manager.NeutronManager.get_plugin()
> > >return plugin._get_subnet(context, id)
> >
> > Port no longer seen in transaction, like it never existed before
> > (magic?). As a result inserting IPAllocation fails with a foreign key
> > constraint error:
> >
> > DBReferenceError: (IntegrityError) FOREIGN KEY constraint failed
> > u'INSERT INTO ipallocations (port_id, ip_address, subnet_id,
> network_id)
> > VALUES (?, ?, ?, ?)' ('aba6eaa2-2b2f-4ab9-97b0-4d8a36659363',
> > u'10.0.0.2', u'be7bb05b-d501-4cf3-a29a-3861b3b54950',
> > u'169f6a61-b5d0-493a-b7fa-74fd5b445c84')
> > }}}
> >
> > Only OpenContrail tests fail with that error (116 failures[1]). Tests
> > for other plugins pass fine. As I see it, OpenContrail is different from
> > other plugins: each call to the plugin is wrapped into an HTTP request, so
> > getting subnet happens in another transaction. In tests
> requests.post()
> > is mocked and http call gets translated into self.get_subnet(...).
> > Stack trace from plugin._get_subnet() to db_base get_subnet() in open
> > contrail tests looks next[4].
> >
> > Also single test failure with full db debug was uploaded for
> > investigation[5]:
> > - Port is inserted at 362.
> > - Subnet is read by plugin at 384.
> > - IPAllocation was tried to be inserted at 407.
> > Between the Port and IPAllocation inserts no COMMIT/ROLLBACK or delete
> > statements were issued, so I can't find an explanation why the port no
> > longer exists at the IPAllocation insert step.
> > Am I missing something obvious?
> >
> > For now I have several workarounds, which basically amount to not using
> > plugin._get_subnet(). Direct session.query() works without such side
> > effects.
> > But this issue bothers me a lot since I can't explain why it even
> > happens
> > in OpenContrail tests.
> > Any ideas are welcome!
> >

Re: [openstack-dev] [neutron][lbaas][tempest] Data-driven testing (DDT) samples

2015-05-04 Thread Salvatore Orlando
Among the OpenStack projects of which I have some knowledge, none uses any
DDT library.
If you think there might be a library from which lbaas, neutron, or any
other openstack project might take advantage, we should consider it.
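
For reference, this is roughly what the 'ddt' package from PyPI looks like
in use - the test and data values below are made up:

import ddt
import testtools

@ddt.ddt
class MemberApiTest(testtools.TestCase):

    # one test is generated per datum, e.g.
    # test_create_member_1_admin and test_create_member_2_non_admin
    @ddt.data('admin', 'non_admin')
    def test_create_member(self, role):
        self.assertIn(role, ('admin', 'non_admin'))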

Salvatore

On 14 April 2015 at 20:33, Madhusudhan Kandadai <
madhusudhan.openst...@gmail.com> wrote:

> Hi,
>
> I would like to start a thread for tempest DDT in the neutron-lbaas tree.
> The problem comes in when we have test cases for both admin and non-admin
> users. (For example, there is an ongoing patch:
> https://review.openstack.org/#/c/171832/). Of course it has duplication,
> and we want to adhere to the tempest guidelines. Just wondering whether
> we are using a DDT library in other projects; if so, can someone please
> point me to the sample code that is being used currently? It could speed up
> this DDT activity for neutron-lbaas.
>
> In the meantime, I am also gathering/researching about that. Should I have
> any update, I shall keep you posted on the same.
>
> Thanks,
> Madhusudhan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Question about tempest API tests that establish SSH connection to instances

2015-04-28 Thread Salvatore Orlando
At first glance it seems run_ssh is disabled in gate tests [1]. I could
not find any nova job where it is enabled. These tests are therefore skipped.
For what it's worth, they might well be broken now.
Sharing a traceback or filing a bug might help.
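
For the record, the option lives in tempest.conf, and enabling it locally
should amount to something like the following (section name to be
double-checked against your tempest version):

[compute]
run_ssh = True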

Salvatore

[1]
http://logs.openstack.org/81/159481/2/check/check-tempest-dsvm-neutron-full/85e039c/logs/testr_results.html.gz

On 28 April 2015 at 10:26, Yaroslav Lobankov  wrote:

> Hi everyone,
>
> I have a question about tempest tests that are related to instance
> validation. Some of these tests are
>
>
> tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]
>
> tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]
>
> tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]
>
> tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]
>
> To enable these tests I should set the config option "run_ssh" to True.
> When I set the option to true and ran the tests, all the tests failed. It
> looks like the SSH code in the API tests doesn't work.
> Maybe I am wrong. The question is the following: which of the tempest jobs
> runs these tests? Maybe I have a tempest misconfiguration.
>
> Regards,
> Yaroslav Lobankov.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Keystone] [Nova] How to validate teanant-id for admin operation

2015-04-27 Thread Salvatore Orlando
I believe German is referring to the case where a user performs an
operation on behalf of some other project to which it bears no relationship.
In this case the user performing the operation authenticates with keystone
with a project_id which is not the one for which the operation is being
performed.

This happens in projects like neutron, where a 'tenant_id' parameter can be
included in the request body.
In CLI terms this is done in the following way:

neutron net-create <net-name> --tenant-id <some-tenant-id>

Note that --tenant-id here is not the usual '--os-tenant-id' parameter.
Therefore it is not sent to keystone for validation and authentication.
Keystone just authenticates the admin user with its own project. Neutron
then lets 'admin' users do everything with anything, including creating
networks and other objects for other tenants, which to neutron are just
plain strings.

For instance:

salvatore@ubuntu:~/devstack$ neutron net-create --tenant-id meh ciccio
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 60d3cfc0-1a75-4a78-920d-edc11ea3fc2d |
| name  | ciccio   |
| tenant_id | meh  |
+---+--+

Neutron is not alone in this behaviour. For instance, glance allows image
owners to share images with a tenant which is likewise not validated with
keystone:

salvatore@ubuntu:~/devstack$ glance member-create
667046ae-d8b1-4ef4-925e-a1c857fd45fa meh
salvatore@ubuntu:~/devstack$ glance member-list --image
667046ae-d8b1-4ef4-925e-a1c857fd45fa
+--+---+---+
| Image ID | Member ID | Can Share |
+--+---+---+
| 667046ae-d8b1-4ef4-925e-a1c857fd45fa | meh   |   |
+--+---+---+

On the other hand I believe keystone developers are advocating for a
behaviour like the following:

salvatore@ubuntu:~/devstack$ nova --os-project-id
4704447e0f7e48558cf15fe63341f412 boot --image
 667046ae-d8b1-4ef4-925e-a1c857fd45fa --flavor 42 --nic
net-id=5aff7242-97f6-48be-9d82-c06a28a7f1cf meh
+--++
| Property | Value
 |
+--++
| id   |
34ea6810-01a8-4cfd-b6fa-207ff9f68bac   |
| image| cirros-0.3.2-x86_64-uec
(667046ae-d8b1-4ef4-925e-a1c857fd45fa) |
| name | meh
 |
| tenant_id| 4704447e0f7e48558cf15fe63341f412
|
| user_id  | aa4cac3a2fbd43c0b90fd6ebed44d6ba
|
+--++

Which is made possible by:

salvatore@ubuntu:~/devstack$ keystone user-role-list --user admin --tenant
demo
+--+---+--+--+
|id|  name | user_id
   |tenant_id |
+--+---+--+--+
| f91adfeb71ad462db8f8f7dc1e25b97e | admin |
aa4cac3a2fbd43c0b90fd6ebed44d6ba | 4704447e0f7e48558cf15fe63341f412 |
+--+---+--+--+

I believe Neutron should move away from letting the admin user 'own' the whole
system. Also, since several projects already adopt a model in which users
explicitly have roles in multiple projects, this should not be a cause of any
pain for operators.
I therefore think that the solution for the problem with validation of the
--tenant-id parameter is that we need to get rid of it. From a neutron
perspective this should be done in a backward compatible way. To this aim,
we can even start thinking about versioning the API... If not we can always
add an extension that removes the tenant-id attribute... we can even call
it a "un-extension"... wouldn't that be wonderful?

Generally speaking this is not the first time this topic comes around. I
think we should now really address it, if nothing else because neutron is
misaligned with other openstack projects. As an operator it is far from
ideal that when deploying neutron you have to cons

Re: [openstack-dev] Please stop reviewing code while asking questions

2015-04-24 Thread Salvatore Orlando
On 24 April 2015 at 16:50, Chris Friesen 
wrote:

> On 04/24/2015 07:26 AM, Salvatore Orlando wrote:
>
>  If you think it might be beneficial to adjust tooling so that these
>> "contributions" get counted, this is fine by me. I just wanted to point
>> out that I do not consider those contributions at all (and btw it would
>> be at least more polite to put a +1 rather than a -1).
>>
>
> If you're asking a question to elicit information, then it's quite
> possible you don't have enough information for a +1 yet.


This makes sense in general. I was referring to the specific cases posed by
Julien - curiosities, pedantry, or questions unrelated to the scope of the
patch.
Julien clarified that there are actually questions which warrant a -1, and
surely never a +1. For instance, the kind of "what if" questions listed by
Doug. In this case it makes sense for a reviewer to put a hold on a patch
while waiting for an answer.


>
> Chris
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-24 Thread Salvatore Orlando
On 24 April 2015 at 15:13, Kyle Mestery  wrote:

> On Fri, Apr 24, 2015 at 4:06 AM, loy wolfe  wrote:
>
>> It's already moved away from the original thread, so I'm starting this
>> new one, also with some extra tags because I think it touches some
>> cross-project areas.
>>
>> Original discussion and references:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-April/062384.html
>>
>> https://review.openstack.org/#/c/176501/1/specs/liberty/reference-split.rst
>>
>> Background summary:
>> All in-tree implementations would be split out of Openstack
>> networking, leaving Neutron as a naked "API/DB" platform, with a list
>> of out-of-tree implementation git repos, which are no longer maintained
>> by the core team, but may be given a nominal "big tent" under the
>> Openstack umbrella.
>>
>>
> I'm not sure what led you to this discussion, but it's patently incorrect.
> We're going to split the in-tree reference implementation into a separate
> git repository. I have not said anything about the current core reviewer
> team not being responsible for that. It's natural to evolve to a core
> reviewer team which cares deeply about that, vs. those who care deeply
> about the DB/API layer. This is exactly what happened when we split out the
> advanced services.
>

This discussion seems quite similar to the one we had about non-reference
plugins.
Following the linux analogy you mention below, Neutron should have been
deprived of its plugins and drivers. And indeed, despite appearances, it
hasn't. Any user can still grab drivers as before. They just reside in
different repos. This is not different, imho, from the concept of
maintainers that linux has.
Besides, you make it look as if the management layer (API/DB) is just a
tiny, insignificant piece of software. I disagree quite strongly here, but
perhaps it's just me seeing in Neutron's mgmt layer something more than
what it actually is.


>
>
>> Motivation: a) a smaller core team only focuses on the in-tree API/DB
>> definition, freed from concrete control-function implementation; b) if
>> there is an official implementation inside Neutron, third-party external
>> SDN controllers would face competition.
>>
>
Perhaps point (b) is a bit unclear. Are you stating that having this
control plane in Neutron gives it a "better placement" compared with other
solutions?


>
>> I'm not sure whether it's exactly what cloud operators want Openstack
>> to deliver. Do they want an off-the-shelf package, or just a framework,
>> where they have to take responsibility for integrating with other
>> external controlling projects? An analogy with Linux: a kernel alone,
>> without any device drivers, is of no use at all.
>>
>>
> We're still going to deliver ML2+OVS/LB+[DHCP, L3, metadata] agents for
> Liberty. I'm not sure where your incorrect assumption on what we're going
> to deliver is coming from.
>

I would answer with a different analogy - nova. Consider the various agents
as if they were libvirt. Just as libvirt is a component which you use to
control your hypervisor, the agents control the data plane (OVS and
utilities like iptables/conntrack/dnsmasq/etc). With this analogy I believe
Neutron's "reference" control plane deserves to live on its own, just like
nobody would ever think that a libvirt implementation within nova is
something sane.
However, ML2 is a different beast. It has management and control logic
inside, so we'll need a good surgeon there. Pretty sure our refactoring
fans are already drooling at the thought of cutting apart another component.


>
>
>> There are already many debates about nova-network to Neutron parity.
>> If the widely used OVS and LB drivers are out of tree and have to be
>> integrated separately by customers, how do they migrate from
>> nova-network? A standalone SDN controller has a steep learning curve,
>> and a lot of users don't care whether ODL or OpenContrail is the better
>> one to integrate; they just want an Openstack package that is easy to go
>> with a default in-tree implementation, and is ready to drive all kinds
>> of open source or commercial backends.
>>
>
I'm not sure what you mean here. In your opinion, do operators want
something that works and provides everything out of the box, or something
which is able to drive open source and commercial backends?
And besides, I do not see the complication for operators arising from this
proposal. It's not like they have to maintain another component - indeed
from an operator perspective l3 agents, dhcp agents, and so on are already
different components to maintain (and that's one of the pain points they
feel in using neutron).

>
>>
> Do you realize that ML2 plus the L2 agent is an SDN controller already?
>
>
>> BTW: +1 to henry and mathieu, that indeed Openstack is not responsible
>> for switch/router/fw projects, but it should be responsible for the
>> scheduling, pooling, and driving of those backends, which is the same
>> case as the Nova/Cinder scheduler and compute/volume manager. These
>> controlling functions shouldn't be 

Re: [openstack-dev] Please stop reviewing code while asking questions

2015-04-24 Thread Salvatore Orlando
On 24 April 2015 at 14:11, Russell Bryant  wrote:

> On 04/24/2015 07:21 AM, Amrith Kumar wrote:
> > We had a hypothesis about why +0 was rarely used (never conclusively
> > proved). Our hypothesis was that since Stackalytics didn't count +0's
> > it led to an increased propensity to -1 something. It would be
> > wonderful if we could try the experiment of giving credit for 0's and
> > seeing if it changes behavior.
>
> I think this makes a lot of sense.  These stats really do drive
> behavior.  I'd certainly be open to a patch to reviewstats [1] to count
> +0 comments and I think it would be good for stackalytics to consider
> the same.
>
> [1] http://git.openstack.org/cgit/openstack-infra/reviewstats



Frankly I think that it is an annoying behaviour to set a score just so
that your act of asking a question or nit-picking a patch gets counted.
Even if internally in project teams we do count these stats, rest assured
that we also verify the quality that lies behind those numbers.
A contributor who does proof-reading of 600 commit messages a month surely
won't be promoted to any core team.

If you think it might be beneficial to adjust tooling so that these
"contributions" get counted, this is fine by me. I just wanted to point out
that I do not consider those contributions at all (and btw it would be at
least more polite to put a +1 rather than a -1).
It is my opinion that the kind of negative scores pointed out by Ihar and
Julien should just be ignored. As a core reviewer for Openstack/Neutron
I've been actually doing so for a while - I hope now I won't be accused of
being community un-friendly ;)
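
For what it's worth, counting these is easy to prototype against Gerrit's
REST API. A rough sketch follows - the endpoint is real, but treat the
accounting as an assumption: with DETAILED_LABELS a value of 0 also shows
up for reviewers who simply never voted, so a real tool would have to
cross-check review comments as well:

import json
import requests

def count_zero_votes(project, reviewer):
    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': 'project:%s' % project,
                                'o': 'DETAILED_LABELS'})
    # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI.
    changes = json.loads(resp.text[4:])
    return sum(1 for change in changes
               for vote in change.get('labels', {})
                                 .get('Code-Review', {})
                                 .get('all', [])
               if vote.get('value') == 0
               and vote.get('username') == reviewer)

print(count_zero_votes('openstack/neutron', 'some-reviewer'))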

Salvatore


>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   4   5   >