Re: [openstack-dev] [tc] Leadership training proposal/info

2016-03-07 Thread Colette Alexander
On Fri, Mar 4, 2016 at 12:54 PM, Colette Alexander <
colettealexan...@gmail.com> wrote:
>
> Current status, btw is 5 TC members as a 'yes' for going on April 21/22  -
> the Thurs/Fri before the summit. 4 additional people have mentioned they're
> interested, but unable to make the scheduling work for this particular
> time.
>
> I also have heard from some core reviewers and PTLs who are eager to
> participate, and who've asked to attend if it's possible.
>
>
Hi everyone,

Just following up with this - I've been in touch with every member of the
TC and there's been a pretty significant interest in the training itself,
and a desire from folks who can't make it in April for pushing dates back
to accommodate as many members as possible.

So - I'm going to suggest we hold off on training until June, with the
understanding that current TC members will be invited as well as those
elected to the TC in April. I'll be checking in on preferred dates with
everyone in the next few days.

Thanks everyone, for all of the feedback and interest!

-colette
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][TaaS] Possible points to be considered for TaaS Spec

2016-03-07 Thread reedip banerjee
While reading the specs in [1] and [2], there are certain things we may
need to discuss before proceeding further.

a) Reference point for Ingress/Egress traffic:
There may be some confusion about how we are labelling Ingress and Egress
(is it with respect to a VM, to a switch, or to some other entity?).
Since we are looking from "inside the VM" and not from "inside the network",
that needs to be stated explicitly in the spec.
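For illustration only (field names follow the draft spec where I remember
them, otherwise they are invented), the spec could pin the reference point
down with something like:

    # Hypothetical tap-flow create request; "direction" is interpreted
    # relative to the monitored VM port, not the switch or the network.
    tap_flow_request = {
        "tap_flow": {
            "name": "mirror-vm1",
            "source_port": "<UUID of the monitored VM's Neutron port>",
            "tap_service_id": "<UUID of the Tap Service>",
            # "IN"   = packets entering the VM through this port
            # "OUT"  = packets leaving the VM through this port
            # "BOTH" = both directions
            "direction": "BOTH",
        }
    }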

b) How to perceive TaaS:
In the section "Proposed Changes" Taas has been compared with a Core Neutron
Plugin ( L3-Router) and a plugin which has emerged out of Neutron ( Neutron
LBaaS).
This might cause confusion to the reviewers. It would be better that we
decide
how we would like to demonstrate TaaS:
- Is it a plugin which can be integrated with the Neutron Core
- Or is it an extension of the Core Neutron Services which can be used by
selected users

Based on that decision, we can adjust the explanation to make the spec a bit
more streamlined.

c) Device Owner for TaaS:
- If the Tap Service creates a "destination" port, the port would have a
device owner of the form "network:tap".
- If that "destination" port is then attached to a VM and the VM is booted,
nova changes the owner to "compute:nova".

# Is there any impact from the change of the device_owner?
# If there is an impact, should nova be changed so that the device_owner is
not modified?
# When, in the future, TaaS supports the user providing an already-created
port, should the device owner be checked and modified?
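Purely as an illustration of that last point (this is not in the spec, and
the allowed-owner policy below is invented), such a check could look roughly
like this with python-neutronclient:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    def validate_destination_port(port_id):
        # Hypothetical policy: only accept ports that are unbound or already
        # tagged as TaaS-owned; refuse to silently re-purpose a compute port.
        port = neutron.show_port(port_id)['port']
        owner = port.get('device_owner', '')
        if owner and not owner.startswith('network:tap'):
            raise ValueError("port %s is owned by %r; refusing to use it as "
                             "a tap destination" % (port_id, owner))
        return port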

d) Outcome of Deleting the VM where TaaS operates
The following might be added to the spec:

1. Deletion of the VM (and the port attached to it) from which we were
mirroring (source of the mirror):
In this case we would do a cascade delete of the 'Tap_Flow' instances that
were associated with the port that was deleted.

2. Deletion of the VM (and the port attached to it) to which we were
mirroring (destination of the mirror):
In this case we would do a cascade delete of the 'Tap_Service' instance
that was associated with the port that was deleted.
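To illustrate the intent (the model and column names below are invented,
apart from the Tap_Flow/Tap_Service names used above), the cascade could be
expressed roughly as:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class TapService(Base):  # hypothetical, minimal columns only
        __tablename__ = 'tap_services'
        id = sa.Column(sa.String(36), primary_key=True)
        port_id = sa.Column(sa.String(36))       # the "destination" port

    class TapFlow(Base):     # hypothetical, minimal columns only
        __tablename__ = 'tap_flows'
        id = sa.Column(sa.String(36), primary_key=True)
        source_port = sa.Column(sa.String(36))   # the mirrored port
        tap_service_id = sa.Column(sa.String(36))

    def on_port_delete(session, port_id):
        # Case 1: the port was a mirror source -> cascade-delete its flows.
        session.query(TapFlow).filter_by(source_port=port_id).delete()
        # Case 2: the port was a mirror destination -> cascade-delete the
        # tap service and any tap flows still pointing at it.
        for svc in session.query(TapService).filter_by(port_id=port_id):
            session.query(TapFlow).filter_by(tap_service_id=svc.id).delete()
            session.delete(svc)
        session.commit()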

e) Making the API independent of Open vSwitch
As per our last discussion [3], it is better that we split our implementation
of TaaS, so that
 # the focus is not limited to Open vSwitch, which may be a point of concern
during review
 # other vendors can create their own pluggable implementations
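A rough sketch of what such a split could look like (all names here are
invented for illustration): the plugin calls an abstract driver interface,
and the Open vSwitch code becomes just one implementation behind it.

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class TaasDriverBase(object):
        """Hypothetical backend interface; each vendor ships a subclass."""

        @abc.abstractmethod
        def create_tap_service(self, context, tap_service):
            pass

        @abc.abstractmethod
        def delete_tap_service(self, context, tap_service):
            pass

        @abc.abstractmethod
        def create_tap_flow(self, context, tap_flow):
            pass

        @abc.abstractmethod
        def delete_tap_flow(self, context, tap_flow):
            pass

    class OvsTaasDriver(TaasDriverBase):
        """The existing Open vSwitch logic would live here."""

        def create_tap_service(self, context, tap_service):
            pass  # program the OVS mirror flows

        def delete_tap_service(self, context, tap_service):
            pass  # remove the OVS mirror flows

        def create_tap_flow(self, context, tap_flow):
            pass

        def delete_tap_flow(self, context, tap_flow):
            pass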

f) Choice of Tapping before/after Sec Groups

Security Groups can filter a lot of traffic, and implementing TaaS before or
after the SG can impact what is actually mirrored.
As referenced in [1], we can provide this choice as a future course of work;
in the meanwhile, the spec should state clearly which option (before or
after) we are implementing now.




[1]:https://review.openstack.org/#/c/96149/8/specs/juno/tap-as-a-service.rst
[2]:
https://review.openstack.org/#/c/256210/5/specs/mitaka/tap-as-a-service.rst
[3]:
http://eavesdrop.openstack.org/meetings/taas/2016/taas.2016-03-02-06.33.log.txt

-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova does not add ports to br-int after a neutron-ovs-cleanup

2016-03-07 Thread Kevin Benton
That script isn't meant to be run on a system with running VMs. It's there
to clean up leftover state on startup or before reconfiguration.
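If someone does end up in this state, one possible (untested, offered only
as a sketch) way to re-plug a single interface by hand is to re-create the
OVS port with the external-ids that nova normally sets when it plugs the
VIF, as far as I recall; the port UUID, MAC and instance UUID can be read
from neutron and nova:

    import subprocess

    def replug(dev, port_uuid, mac, instance_uuid, bridge='br-int'):
        # Re-add a qvoXXX device to br-int with the external-ids that the
        # libvirt hybrid plug usually sets.
        subprocess.check_call([
            'ovs-vsctl', '--', '--if-exists', 'del-port', bridge, dev, '--',
            'add-port', bridge, dev, '--',
            'set', 'Interface', dev,
            'external-ids:iface-id=%s' % port_uuid,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_uuid,
        ])

    # e.g. replug('qvo1234abcd-ef', '<neutron port UUID>',
    #             'fa:16:3e:aa:bb:cc', '<instance UUID>')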

On Mon, Mar 7, 2016 at 4:52 PM, Sergio Morales Acuña 
wrote:

> Hi.
>
> After running a  neutron-ovs-cleanup and restarting openvswitch-agent and
>  nova-compute with all the VMs running,  nova does not add existing ports
> to br-int.
>
> All qvb@qvo exists but, after restarting nova-compute, only qvo from new
> machines are added correctly to the br-int.
>
> If I reboot the physical node all the qvo's are added to br-int without
>  problem.
>
> Is there any way to force this procedure (add qvos to br-int port)
> manually?
>
> P.D.: No errors detected in the logs.
> P.D.: I'm using Liberty from RDO.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] How do we move forward with xstatic releases?

2016-03-07 Thread Richard Jones
Two things I forgot to mention:

There is another break point in the diagram of releases, at the point where
X1 is released: Horizon does not currently use upper-constraints and thus
will immediately pick up the new xstatic release, potentially breaking
things. This is easy to fix - I will be proposing a patch soon.

SOLUTION 6 - make zuul capable of performing atomic cross-repository
commits.

Richard

Sent from my portable device, please excuse the brevity.
On 8 Mar 2016 15:52, "Richard Jones"  wrote:

> We've solved *most* of the issues around releasing new xstatic packages,
> documented here[1] (and related documentation).
>
> We have one final issue that's blocking us, which is that during the
> xstatic release there will be a point at which Horizon may be broken from
> an integrated point of view - some of the interfaces may not work and fail
> tests. The process goes something like this:
>
> ​Note: this assumes that depends-on can reliably bring in patches from all
> over the place into a gate environment, which is technically possible, but
> not necessarily correct today.
>
> The problem is that because we can't atomically update both
> upper-constraints *and* Horizon at the same time (or upper-constraints
> *and* xstatic-angular, or Horizon *and* xstatic-angular) we run into a
> situation where Horizon will be running against the wrong version of
> xstatic-angular.
>
> So we break one of the basic assumptions of the OpenStack world: that
> every single commit in every repository for the integrated environment will
> pass tests.
>
> In the Python world, we code around this by making Horizon compatible with
> both the X and X1 versions of xstatic-angular (if it was a Python library).
> In Javascript land this is much more difficult - Javascript libraries tend
> to break compatibility in far more interesting ways. Maintaining
> compatibility across CSS and font releases is also a difficult problem,
> though changes here are unlikely to break Horizon enough that gate tests
> would fail. So, solution 1 to the problem is:
>
> SOLUTION 1: always maintain Horizon compatibility across xstatic library
> releases.
>
> This is potentially very difficult to guarantee. So, a second solution has
> been proposed:
>
> SOLUTION 2: move the upper-constraints information for *the xstatic
> libraries only* from the global upper-constraints file into Horizon's
> repository.
>
> This allows us to atomically update both Horizon and the xstatic library
> version, removing the potential to break because of the version mismatch.
> Unfortunately it means that we have version compatibility issues with
> anything else that wants to install alongside Horizon that also uses
> xstatic packages. For example, Horizon plugins. We could move Horizon
> plugins into the Horizon repository to solve this. /ducks
>
> A variation on this solution is:
>
> SOLUTION 3: allow Horizon to locally override upper-constraints for the
> time needed to merge a patch in devstack.
>
> This solution allows Horizon to atomically update itself and the xstatic
> library, but it also means installing Horizon in a CI/CD environment
> becomes more difficult due to the need to always pull down the
> upper-constraints file and edit it. We could code this in to tox, but that
> doesn't help eg. devstack which needs to also do this thing.
>
> SOLUTION 4: vendor the javascript
>
> Heh.
>
> SOLUTION 5: have dependencies on xstatic move from global requirements to
> Horizon
>
> This is similar to 2 & 3 with some of the same drawbacks (multiple users
> of xstatic) but also we'd need a change to global-requirements handling to
> ignore some entries in Horizon's requirements.
>
> Your thoughts on any and all of the above are requested.
>
>
> Richard
>
> [1] https://review.openstack.org/#/c/289142/
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Michal Rostecki

On 03/07/2016 05:22 PM, Michał Jastrzębski wrote:

Upgrades are required, true, but not necessarily automatic ones.
People can still rebuild and redeploy containers using a normal deploy.
That will cause downtime and is less optimal, but it is possible. Also, with
the backport of named volumes it won't be data-destroying. It will cause
total downtime of the APIs, but well, it's the first version.

So I'm -1 to porting it to 1.1.0.

But I would suggest another option, namely a 1.2.0 with the automatic
upgrades we have now. It will allow a 1.1.0->1.2.0 upgrade and it will
not add more work to 1.1.0, which we need asap (we need it well tested
by the Austin summit at the latest). Adding upgrades might make it tight,
especially since the infra upgrades aren't finished yet in master.

Cheers,
Michal



I like this idea.

I'm OK with the idea of backporting upgrade playbooks to stable/liberty
in general. But I think that the other Michal made a valid point: 1.1.0
has to be well tested soon, and at the same time we don't have all the
upgrades yet, even in master.


So,
-1 for backporting to 1.1.0
+1 for backporting later to 1.2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] How do we move forward with xstatic releases?

2016-03-07 Thread Richard Jones
We've solved *most* of the issues around releasing new xstatic packages,
documented here[1] (and related documentation).

We have one final issue that's blocking us, which is that during the
xstatic release there will be a point at which Horizon may be broken from
an integrated point of view - some of the interfaces may not work and fail
tests. The process goes something like this:

Note: this assumes that depends-on can reliably bring in patches from all
over the place into a gate environment, which is technically possible, but
not necessarily correct today.

The problem is that because we can't atomically update both
upper-constraints *and* Horizon at the same time (or upper-constraints
*and* xstatic-angular, or Horizon *and* xstatic-angular) we run into a
situation where Horizon will be running against the wrong version of
xstatic-angular.

So we break one of the basic assumptions of the OpenStack world: that every
single commit in every repository for the integrated environment will pass
tests.

In the Python world, we code around this by making Horizon compatible with
both the X and X1 versions of xstatic-angular (if it was a Python library).
In Javascript land this is much more difficult - Javascript libraries tend
to break compatibility in far more interesting ways. Maintaining
compatibility across CSS and font releases is also a difficult problem,
though changes here are unlikely to break Horizon enough that gate tests
would fail. So, solution 1 to the problem is:

SOLUTION 1: always maintain Horizon compatibility across xstatic library
releases.

This is potentially very difficult to guarantee. So, a second solution has
been proposed:

SOLUTION 2: move the upper-constraints information for *the xstatic
libraries only* from the global upper-constraints file into Horizon's
repository.

This allows us to atomically update both Horizon and the xstatic library
version, removing the potential to break because of the version mismatch.
Unfortunately it means that we have version compatibility issues with
anything else that wants to install alongside Horizon that also uses
xstatic packages. For example, Horizon plugins. We could move Horizon
plugins into the Horizon repository to solve this. /ducks

A variation on this solution is:

SOLUTION 3: allow Horizon to locally override upper-constraints for the
time needed to merge a patch in devstack.

This solution allows Horizon to atomically update itself and the xstatic
library, but it also means installing Horizon in a CI/CD environment
becomes more difficult due to the need to always pull down the
upper-constraints file and edit it. We could code this into tox, but that
doesn't help e.g. devstack, which also needs to do this.
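As a rough sketch of the kind of tooling that implies (the URL and the
pinned version below are assumptions, not a proposal), a small wrapper could
fetch the global file and rewrite just the xstatic pins before installing:

    import requests  # assumes python-requests is available

    UC_URL = ('https://git.openstack.org/cgit/openstack/requirements/'
              'plain/upper-constraints.txt')        # assumed location
    OVERRIDES = {'XStatic-Angular': '1.4.10.1'}     # hypothetical new pin

    def build_constraints(path='local-upper-constraints.txt'):
        out = []
        for line in requests.get(UC_URL).text.splitlines():
            name = line.split('===')[0]
            if name in OVERRIDES:
                line = '%s===%s' % (name, OVERRIDES[name])
            out.append(line)
        with open(path, 'w') as f:
            f.write('\n'.join(out) + '\n')
        return path

    # then: pip install -c local-upper-constraints.txt -r requirements.txt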

SOLUTION 4: vendor the javascript

Heh.

SOLUTION 5: have dependencies on xstatic move from global requirements to
Horizon

This is similar to 2 & 3 with some of the same drawbacks (multiple users of
xstatic) but also we'd need a change to global-requirements handling to
ignore some entries in Horizon's requirements.

Your thoughts on any and all of the above are requested.


Richard

[1] https://review.openstack.org/#/c/289142/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project] Meeting SKIPPED, Tue March 8th, 21:00 UTC

2016-03-07 Thread Mike Perez
Hi all!

We will be skipping the cross-project meeting since there are no agenda items
to discuss, but someone can add one [1] to call a meeting next time. Thanks!

[1] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] DocImpact in reviews

2016-03-07 Thread Vikram Hosakote (vhosakot)
Good point.  I think this should be caught during code review.

Regards,
Vikram Hosakote
IRC: vhosakot

From: Swapnil Kulkarni >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, March 7, 2016 at 1:40 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [Kolla] DocImpact in reviews

Hello,

I was triaging some of the bugs reported to Kolla and I found a couple
of bugs [2] [3] created due to incorrect usage of the DocImpact tag [1] in
reviews. The DocImpact tag should only be used when your change requires
one of the following two things,

1) an update to some existing documentation
2) some new documentation

which is not addressed in the current changeset where you are adding
the DocImpact tag.
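For example, a commit message that legitimately carries the tag might look
something like this (the change and option name are made up; only the tag
placement matters):

    Add SSL termination support to the haproxy container

    This introduces a new haproxy_enable_ssl option that operators need
    to set; the deployment guide does not cover it yet.

    DocImpact: new haproxy_enable_ssl option needs to be documented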


[1] https://wiki.openstack.org/wiki/Documentation/DocImpact
[2] https://bugs.launchpad.net/kolla/+bug/1553291
[3] https://bugs.launchpad.net/kolla/+bug/1553405

Best Regards,
Swapnil Kulkarni
irc : coolsvap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Martin André
On Tue, Mar 8, 2016 at 2:41 AM, Steven Dake (stdake) 
wrote:

>
>
> On 3/7/16, 10:16 AM, "Paul Bourke"  wrote:
>
> >This is a messy topic. I feel there's been some miscommunication and
> >confusion on the issue which hopefully I can sum up.
> >
> >As far as I remember it, Sam is correct, we did always plan to do
> >Liberty upgrades. However, over the course of time post Tokyo these
> >plans didn't really materialise, at which point Micheal kindly stepped
> >forward to kick things into action [0].
> >
> >Between now and then the focus shifted to "how do we get from Liberty to
> >Mitaka", and the original discussion of moving between minor Liberty
> >releases fell by the wayside. I think this is where the confusion has
> >arisen.
>
This is accurate.  It wasn't a matter of not materializing, however, we
> were capacity constrained.  We will have upgrades working for Mitaka but
> it required a lot of everyone's time.
>

The initial work on upgrade took some time, but that doesn't mean it's not
an easy backport to Liberty. As far as I can tell it's perfectly isolated
code and can be backported without too much trouble.

I'd like to stick to what we agreed in Tokyo, I'm +1 on backporting upgrade
playbooks to 1.1.0.

Martin


> Regards
> -stev
>
> >
> >As I mentioned before I have been opposed to backporting features to
> >stable/liberty, as this is against the philosophy of a stable branch
> >etc. etc. However, as has been mentioned many times before, this is a
> >new project, hindsight is a great thing. Ideally users running Liberty
> >who encounter a CVE could be told to go to Mitaka, but in reality this
> >is an unreasonable expectation and operators who have gone on our
> >previous release notes that Liberty is ready to rock will feel screwed
> >over.
> >
> >So based on the above I am +1 to get upgrades into Liberty, I hope this
> >makes sense.
> >
> >Regards,
> >-Paul
> >
> >[0]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081467.ht
> >ml
> >
> >On 07/03/16 16:00, Sam Yaple wrote:
> >> On Mon, Mar 7, 2016 at 3:03 PM, Steven Dake (stdake)  >> > wrote:
> >>
> >> Hi folks,
> >>
> >> It was never really discussed if we would back-port upgrades to
> >> liberty.  This came up during  an irc conversation Friday [1], and a
> >> vote was requested.  The facts of the discussion, distilled, are:
> >>
> >>   * We never agreed as a group to do back-port of upgrade during our
> >> back-port discussion
> >>   * An operator that can't upgrade her Z version of Kolla (1.1.1
> >> from 1.1.0) is stuck without CVE or OSSA corrections.
> >>   * Because of a lack of security upgrades, the individual
> >> responsible for executing the back-port would abandon the work
> >> (but not use the abandon feature of gerrit for changes
> >> already in the queue)
> >>
> >> Since we never agreed, in that IRC discussion a vote was requested,
> >> and I am administering the vote.  The vote request was specifically
> >> should we have a back-port of upgrade in 1.1.0.  Both parties agreed
> >> they would live with the results.
> >>
> >> I would like to point out that not porting upgrades means that the
> >> liberty branch would essentially become abandoned unless some other
> >> brave soul takes up a backport.  I would also like to point out that
> >> that is yet another exception much like thin-containers back-port
> >> which was accepted.  See how exceptions become the way to the dark
> >> side.  We really need to stay exception free going forward (in
> >> Mitaka and later) as much as possible to prevent expectations that
> >> we will make exceptions when none should be made.
> >>
> >> This is not an exception. This was always a requirement. If you disagree
> >> with that then we have never actually had a stable branch. The fact is
> >> we _always_ needed z version upgrades for Kolla. It was _always_ the
> >> plan to have them. Feel free to reference the IRC logs and our prior
> >> mid-cycle and our Tokyo upgrade sessions. What changed was time and
> >> peoples memories and that's why this is even a conversation.
> >>
> >> Please vote +1 (backport) or -1 (don't backport).  An abstain in
> >> this case is the same as voting -1, so please vote either way.  I
> >> will leave the voting open for 1 week until April 14th.  If there is
> >> a majority in favor, I will close voting early.  We currently
> >> require 6 votes for majority as our core team consists of 11 people.
> >>
> >> Regards,
> >> -steve
> >>
> >>
> >> [1]
> >>
> >>
> http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-03-04.log.h
> >>tml#t2016-03-04T18:23:26
> >>
> >> Warning [1] was a pretty heated argument and there may have been
> >> some swearing :)
> >>
> >> voting.
> >>
> >> "Should we 

Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-07 Thread Martin André
+1

On Tue, Mar 8, 2016 at 12:48 AM, Sam Yaple  wrote:

> +1 Keep up the great reviews and patches!
>
> Sam Yaple
>
> On Mon, Mar 7, 2016 at 3:41 PM, Jeff Peeler  wrote:
>
>> +1
>>
>> On Mon, Mar 7, 2016 at 3:57 AM, Michal Rostecki 
>> wrote:
>> > +1
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-07 Thread Vijay Venkatachalam
Hi German,

>> So if you have some 3rd party hardware you only need to change the database 
>> (your steps 1-5) since the 3rd party hardware will just keep load balancing…

This is not the case with NetScaler; it has to go through a delete of v1
followed by a create in v2 if a smooth migration is required.

Thanks,
Vijay V.
-Original Message-
From: Eichberger, German [mailto:german.eichber...@hpe.com] 
Sent: 08 March 2016 00:00
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Hi Sam,

So if you have some 3rd party hardware you only need to change the database 
(your steps 1-5) since the 3rd party hardware will just keep load balancing…

Now for Kevin’s case with the namespace driver:
You would need a 6th step to reschedule the loadbalancers with the V2 namespace 
driver — which can be done.

If we want to migrate to Octavia or (from one LB provider to another) it might 
be better to use the following steps:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
   Monitors, Members) into some JSON format file(s)
2. Delete LBaaS v1
3. Uninstall LBaaS v1
4. Install LBaaS v2
5. Transform the JSON format file into some scripts which recreate the load
   balancers with your provider of choice
6. Run those scripts

The problem I see is that we will probably end up with different VIPs so the 
end user would need to change their IPs… 
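To make step 1 above concrete, a rough sketch (python-neutronclient, error
handling and pagination omitted; the v1 listing calls are from memory, so
double-check them) could be:

    import json

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    def dump_lbaas_v1(path='lbaas_v1_dump.json'):
        # Collect the LBaaS v1 objects we need in order to recreate them
        # on the v2 side with whichever provider is chosen.
        data = {
            'vips': neutron.list_vips()['vips'],
            'pools': neutron.list_pools()['pools'],
            'members': neutron.list_members()['members'],
            'health_monitors':
                neutron.list_health_monitors()['health_monitors'],
        }
        with open(path, 'w') as f:
            json.dump(data, f, indent=2)
        return path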

Thanks,
German



On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:

>As for a migration tool.
>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>am in favor for the following process:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
>   Monitors, Members) into some JSON format file(s)
>2. Delete LBaaS v1
>3. Uninstall LBaaS v1
>4. Install LBaaS v2
>5. Import the data from 1 back over LBaaS v2 (need to allow moving from
>   flavor1-->flavor2, need to make room for some custom modification for
>   mapping between v1 and v2 models)
>
>What do you think?
>
>-Sam.
>
>
>
>
>-Original Message-
>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>Sent: Friday, March 04, 2016 2:06 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Ok. Thanks for the info.
>
>Kevin
>
>From: Brandon Logan [brandon.lo...@rackspace.com]
>Sent: Thursday, March 03, 2016 2:42 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Just for clarity, V2 did not reuse tables, all the tables it uses are only for 
>it.  The main problem is that v1 and v2 both have a pools resource, but v1 and 
>v2's pool resource have different attributes.  With the way neutron wsgi 
>works, if both v1 and v2 are enabled, it will combine both sets of attributes 
>into the same validation schema.
>
>The other problem with v1 and v2 running together was only occurring when the 
>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>may actually have been fixed with some agent updates in neutron, since that is 
>common code.  It needs to be tested out though.
>
>Thanks,
>Brandon
>
>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>> Just because you had thought no one was using it outside of a PoC doesn't 
>> mean folks aren''t using it in production.
>>
>> We would be happy to migrate to Octavia. We were planning on doing just that 
>> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
>> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
>> an unfortunate decision that blocked that activity.
>>
>> We're still trying to figure out a path forward.
>>
>> We have an outage window next month. after that, it could be about 6 
>> months before we could try a migration due to production load picking 
>> up for a while. I may just have to burn out all the lb's switch to 
>> v2, then rebuild them by hand in a marathon outage :/
>>
>> And then there's this thingy that also critically needs fixing:
>> https://bugs.launchpad.net/neutron/+bug/1457556
>>
>> Thanks,
>> Kevin
>> 
>> From: Eichberger, German [german.eichber...@hpe.com]
>> Sent: Thursday, March 03, 2016 12:47 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>> Kevin,
>>
>>  If we are offering a migration tool it would be namespace -> 
>> namespace (or maybe Octavia since [1]) - given the limitations nobody 
>> should be using the namespace driver outside a PoC so I am a bit 
>> confused why customers can't self migrate. With 3rd party Lbs I would 
>> assume vendors proving those scripts to make sure their particular 
>> 

Re: [openstack-dev] [openstack][keystone] What is the difference between auth_url and auth_uri?

2016-03-07 Thread Jamie Lennox
This is an unfortunate naming scheme that is a long story. Simple version
is:

auth_url is what the auth plugin uses, i.e. where the service itself will
authenticate before it validates tokens; it is usually an internal URL.
auth_uri is what ends up in the WWW-Authenticate: keystone-uri= header, and
so it should be the unversioned public endpoint.

Sorry for the confusion.
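Concretely (the endpoint URLs below are made up), a typical
[keystone_authtoken] section ends up looking something like:

    [keystone_authtoken]
    # unversioned public endpoint, advertised in WWW-Authenticate headers
    auth_uri = http://public.example.com:5000/
    # endpoint the service itself authenticates against to validate tokens,
    # usually internal; consumed by the configured auth plugin
    auth_url = http://controller.internal:35357
    # ... plus the usual auth plugin credential options (username, password,
    # project, etc.)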

On 29 February 2016 at 22:41, Qiao, Liyong  wrote:

> URI and URL are different, but sometimes they might be the same.
>
>
>
> Well ,you can see it from http://www.ietf.org/rfc/rfc3986.txt
>
> BR, Eli(Li Yong)Qiao
>
>
>
> *From:* 王华 [mailto:wanghua.hum...@gmail.com]
> *Sent:* Monday, February 29, 2016 7:04 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [openstack][keystone] What is the difference
> between auth_url and auth_uri?
>
>
>
> Hi all,
>
>
>
> There are two config parameters (auth_uri and auth_url) in
> keystone_authtoken group. I want to know what is the difference between
> them. Can I use only one of them?
>
>
>
>
>
> Best Regards,
>
> Wanghua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova does not add ports to br-int after a neutron-ovs-cleanup

2016-03-07 Thread Sergio Morales Acuña
Hi.

After running neutron-ovs-cleanup and restarting openvswitch-agent and
nova-compute with all the VMs running, nova does not add the existing ports
back to br-int.

All the qvb/qvo pairs exist but, after restarting nova-compute, only the qvo
interfaces of new machines are added correctly to br-int.

If I reboot the physical node, all the qvo interfaces are added to br-int
without problem.

Is there any way to force this procedure (adding the qvo interfaces to
br-int) manually?

P.S.: No errors detected in the logs.
P.S.: I'm using Liberty from RDO.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] unable to see compute.node.cpu.* metrics in ceilometer

2016-03-07 Thread Kapil
I am using Kilo 2015.1.1 version

notification_driver=ceilometer.compute.nova_notifier
notification_driver=nova.openstack.common.notifier.rpc_notifier


I tried the following-

1. printed out event_type in
https://github.com/openstack/ceilometer/blob/stable/kilo/ceilometer/agent/plugin_base.py#L151
However, I am not receiving compute.metrics.update events

2. printed out metrics in
https://github.com/openstack/nova/blob/stable/kilo/nova/compute/resource_tracker.py#L364
I can see that metrics are being generated. I am not sure if they're being
pushed to the notification queue.



Regards,
Kapil Agarwal

On Mon, Mar 7, 2016 at 1:15 PM, Kapil  wrote:

> Hi
>
> I enabled ComputeDriverCPUMonitor in nova.conf on one of the compute
> nodes, restarted nova-compute, ceilometer-agent-compute on the compute node
> and ceilometer-collector, ceilometer-api, ceilometer-agent-central,
> ceilometer-agent-notification on the controller node.
> However, I cannot see the cpu meters or samples in ceilometer.
>
> Any suggestions to what may be the issue ?
>
> Thanks
> Kapil
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-07 Thread Eichberger, German
Hi Kevin,

Floating IP will work in V2 as well :-)

Thanks,
German




On 3/7/16, 2:47 PM, "Fox, Kevin M"  wrote:

>You can attach a floating ip onto a vip. Thats what we've done to work around 
>the issue. Well, at least with v1. never tried it with v2.
>
>Thanks,
>Kevin
>
>From: Samuel Bercovici [samu...@radware.com]
>Sent: Monday, March 07, 2016 11:00 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>As far as I recall, you can specify the VIP in creating the LB so you will end 
>up with same IPs.
>
>-Original Message-
>From: Eichberger, German [mailto:german.eichber...@hpe.com]
>Sent: Monday, March 07, 2016 8:30 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Hi Sam,
>
>So if you have some 3rd party hardware you only need to change the database 
>(your steps 1-5) since the 3rd party hardware will just keep load balancing…
>
>Now for Kevin’s case with the namespace driver:
>You would need a 6th step to reschedule the loadbalancers with the V2 
>namespace driver — which can be done.
>
>If we want to migrate to Octavia or (from one LB provider to another) it might 
>be better to use the following steps:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format file into 
>some scripts which recreate the load balancers with your provider of choice —
>
>6. Run those scripts
>
>The problem I see is that we will probably end up with different VIPs so the 
>end user would need to change their IPs…
>
>Thanks,
>German
>
>
>
>On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
>
>>As for a migration tool.
>>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>>am in favor for the following process:
>>
>>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
>>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
>>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back
>>over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to
>>make room to some custom modification for mapping between v1 and v2
>>models)
>>
>>What do you think?
>>
>>-Sam.
>>
>>
>>
>>
>>-Original Message-
>>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>>Sent: Friday, March 04, 2016 2:06 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Ok. Thanks for the info.
>>
>>Kevin
>>
>>From: Brandon Logan [brandon.lo...@rackspace.com]
>>Sent: Thursday, March 03, 2016 2:42 PM
>>To: openstack-dev@lists.openstack.org
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Just for clarity, V2 did not reuse tables, all the tables it uses are only 
>>for it.  The main problem is that v1 and v2 both have a pools resource, but 
>>v1 and v2's pool resource have different attributes.  With the way neutron 
>>wsgi works, if both v1 and v2 are enabled, it will combine both sets of 
>>attributes into the same validation schema.
>>
>>The other problem with v1 and v2 running together was only occurring when the 
>>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>>may actually have been fixed with some agent updates in neutron, since that 
>>is common code.  It needs to be tested out though.
>>
>>Thanks,
>>Brandon
>>
>>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>>> Just because you had thought no one was using it outside of a PoC doesn't 
>>> mean folks aren''t using it in production.
>>>
>>> We would be happy to migrate to Octavia. We were planning on doing just 
>>> that by running both v1 with haproxy namespace, and v2 with Octavia and 
>>> then pick off upgrading lb's one at a time, but the reuse of the v1 tables 
>>> really was an unfortunate decision that blocked that activity.
>>>
>>> We're still trying to figure out a path forward.
>>>
>>> We have an outage window next month. after that, it could be about 6
>>> months before we could try a migration due to production load picking
>>> up for a while. I may just have to burn out all the lb's switch to
>>> v2, then rebuild them by hand in a marathon outage :/
>>>
>>> And then there's this thingy that also critically needs fixing:
>>> https://bugs.launchpad.net/neutron/+bug/1457556
>>>
>>> Thanks,
>>> Kevin
>>> 
>>> From: Eichberger, German [german.eichber...@hpe.com]
>>> Sent: Thursday, March 03, 2016 12:47 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] 

[openstack-dev] [Infra] Meeting Tuesday March 8th at 19:00 UTC

2016-03-07 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday March 8th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-01-19.15.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-01-19.15.log.html
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-02-16-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] Fwd: [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-07 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/03/16 02:09, Hayes, Graham wrote:
> On 07/03/2016 16:00, Matt Kassawara wrote:
>> Snipping a lot, because I'm particularly interested in one comment...
>>
>>
>> I agree that many developers (especially developers of those groups
>> new to the big tent) seem to think that they need to be on the top
>> level docs.openstack.org , without
>> necessarily understanding that docs.openstack.org/developer
>>  is usually the more
>> appropriate place, at  least to begin with. This should probably be
>> made more explicit both in the Infra Manual, plus anywhere that
>> Foundation is discussing big tent inclusion.
>>
>>
>> We tell operators and users to look at the documentation on
>> docs.openstack.org  because the documentation
>> in /developer is aimed at developers and often lacks polish. Now we're
>> telling developers to put operator/user documentation into /developer?
> 
> To follow up on that - the Foundations "Project Navigator" has one of
> the maturity requirements as "Is there an install guide for this
> project guide (at docs.openstack.org)?"
> 
> If this is required, how can projects get content in to this.
> 
> I went looking about 6 months ago for the information that Stephen
> asked for to update (create) our docs, but couldn't find it.
> 

To try and answer both questions in one reply: The developer documentation 
should live on /developer, with config options automatically picked up for the 
Config Reference where appropriate. If you are new to the big tent, then you 
should also use /developer to create and polish your user documentation. This 
is enough to be considered 'official' according to the Project Navigator. Once 
you have a good amount of quality content, then please feel free to open a 
conversation with docs about inclusion in the top level.

The main reason we do it this way is because writing docs is a very manual, 
labour-intensive task, and the docs team is small. We already have a lot of 
content that we maintain and a lot of people throwing things over the wall to 
us. We simply cannot have every big tent project present in the Install Guide.

Hope that helps clear things up.

Lana

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCAAGBQJW3gf/AAoJELppzVb4+KUygvgH/3CLosJOZQ4HOGzZ6gyESAip
7gxKSNRaN9LTVXGUFMpSwppDMPguM2X78bpQB1cSa6A20mvRHbHKVFtio/virn1l
05R0iWRR5I157ggfVFA8P+tLeVONVTQi0Sa9W/L6GU/Ihr7mplVptYz5tpipmJy9
RYwa3LlOCF1qMQogOdNcv+5Tg2ci6Sqn03xw43jN18iC2dAtJVZJxmjO760mJ9h9
9uLDpb8GOvrCNvM4hdiWoZlEIbYpViaZGcYqQElml1RqZOmzswC1GOrquGIENkTl
eClj7UFUUji0/6WqeoDjSq/60NzT/i+IYvBd0bFDPgmn2kZLt67xCgoyfrj1Cik=
=AfBL
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-07 Thread Fox, Kevin M
Awesome. Thanks. :)

For the record, most of our LBs were created with heat templates. So as of
now, deleting and recreating them will work, I think, but it may cause some
issues if we ever turn on convergence, which is a nice feature.

Thanks,
Kevin

From: Eichberger, German [german.eichber...@hpe.com]
Sent: Monday, March 07, 2016 2:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Ok, for what it’s worth we have contributed our migration script: 
https://review.openstack.org/#/c/289595/ — please look at this as a starting 
point and feel free to fix potential problems…

Thanks,
German




On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:

>As far as I recall, you can specify the VIP in creating the LB so you will end 
>up with same IPs.
>
>-Original Message-
>From: Eichberger, German [mailto:german.eichber...@hpe.com]
>Sent: Monday, March 07, 2016 8:30 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Hi Sam,
>
>So if you have some 3rd party hardware you only need to change the database 
>(your steps 1-5) since the 3rd party hardware will just keep load balancing…
>
>Now for Kevin’s case with the namespace driver:
>You would need a 6th step to reschedule the loadbalancers with the V2 
>namespace driver — which can be done.
>
>If we want to migrate to Octavia or (from one LB provider to another) it might 
>be better to use the following steps:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format file into 
>some scripts which recreate the load balancers with your provider of choice —
>
>6. Run those scripts
>
>The problem I see is that we will probably end up with different VIPs so the 
>end user would need to change their IPs…
>
>Thanks,
>German
>
>
>
>On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
>
>>As for a migration tool.
>>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>>am in favor for the following process:
>>
>>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
>>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
>>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back
>>over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to
>>make room to some custom modification for mapping between v1 and v2
>>models)
>>
>>What do you think?
>>
>>-Sam.
>>
>>
>>
>>
>>-Original Message-
>>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>>Sent: Friday, March 04, 2016 2:06 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Ok. Thanks for the info.
>>
>>Kevin
>>
>>From: Brandon Logan [brandon.lo...@rackspace.com]
>>Sent: Thursday, March 03, 2016 2:42 PM
>>To: openstack-dev@lists.openstack.org
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Just for clarity, V2 did not reuse tables, all the tables it uses are only 
>>for it.  The main problem is that v1 and v2 both have a pools resource, but 
>>v1 and v2's pool resource have different attributes.  With the way neutron 
>>wsgi works, if both v1 and v2 are enabled, it will combine both sets of 
>>attributes into the same validation schema.
>>
>>The other problem with v1 and v2 running together was only occurring when the 
>>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>>may actually have been fixed with some agent updates in neutron, since that 
>>is common code.  It needs to be tested out though.
>>
>>Thanks,
>>Brandon
>>
>>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>>> Just because you had thought no one was using it outside of a PoC doesn't 
>>> mean folks aren''t using it in production.
>>>
>>> We would be happy to migrate to Octavia. We were planning on doing just 
>>> that by running both v1 with haproxy namespace, and v2 with Octavia and 
>>> then pick off upgrading lb's one at a time, but the reuse of the v1 tables 
>>> really was an unfortunate decision that blocked that activity.
>>>
>>> We're still trying to figure out a path forward.
>>>
>>> We have an outage window next month. after that, it could be about 6
>>> months before we could try a migration due to production load picking
>>> up for a while. I may just have to burn out all the lb's switch to
>>> v2, then rebuild them by hand in a marathon outage :/
>>>
>>> And then there's this thingy that also critically needs fixing:
>>> https://bugs.launchpad.net/neutron/+bug/1457556
>>>
>>> Thanks,
>>> Kevin
>>> 

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-07 Thread Fox, Kevin M
You can attach a floating IP onto a VIP. That's what we've done to work around
the issue. Well, at least with v1; never tried it with v2.

Thanks,
Kevin

From: Samuel Bercovici [samu...@radware.com]
Sent: Monday, March 07, 2016 11:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

As far as I recall, you can specify the VIP in creating the LB so you will end 
up with same IPs.

-Original Message-
From: Eichberger, German [mailto:german.eichber...@hpe.com]
Sent: Monday, March 07, 2016 8:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Hi Sam,

So if you have some 3rd party hardware you only need to change the database 
(your steps 1-5) since the 3rd party hardware will just keep load balancing…

Now for Kevin’s case with the namespace driver:
You would need a 6th step to reschedule the loadbalancers with the V2 namespace 
driver — which can be done.

If we want to migrate to Octavia or (from one LB provider to another) it might 
be better to use the following steps:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
   Monitors, Members) into some JSON format file(s)
2. Delete LBaaS v1
3. Uninstall LBaaS v1
4. Install LBaaS v2
5. Transform the JSON format file into some scripts which recreate the load
   balancers with your provider of choice
6. Run those scripts

The problem I see is that we will probably end up with different VIPs so the 
end user would need to change their IPs…

Thanks,
German



On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:

>As for a migration tool.
>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>am in favor for the following process:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back
>over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to
>make room to some custom modification for mapping between v1 and v2
>models)
>
>What do you think?
>
>-Sam.
>
>
>
>
>-Original Message-
>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>Sent: Friday, March 04, 2016 2:06 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Ok. Thanks for the info.
>
>Kevin
>
>From: Brandon Logan [brandon.lo...@rackspace.com]
>Sent: Thursday, March 03, 2016 2:42 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Just for clarity, V2 did not reuse tables, all the tables it uses are only for 
>it.  The main problem is that v1 and v2 both have a pools resource, but v1 and 
>v2's pool resource have different attributes.  With the way neutron wsgi 
>works, if both v1 and v2 are enabled, it will combine both sets of attributes 
>into the same validation schema.
>
>The other problem with v1 and v2 running together was only occurring when the 
>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>may actually have been fixed with some agent updates in neutron, since that is 
>common code.  It needs to be tested out though.
>
>Thanks,
>Brandon
>
>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>> Just because you had thought no one was using it outside of a PoC doesn't 
>> mean folks aren''t using it in production.
>>
>> We would be happy to migrate to Octavia. We were planning on doing just that 
>> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
>> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
>> an unfortunate decision that blocked that activity.
>>
>> We're still trying to figure out a path forward.
>>
>> We have an outage window next month. after that, it could be about 6
>> months before we could try a migration due to production load picking
>> up for a while. I may just have to burn out all the lb's switch to
>> v2, then rebuild them by hand in a marathon outage :/
>>
>> And then there's this thingy that also critically needs fixing:
>> https://bugs.launchpad.net/neutron/+bug/1457556
>>
>> Thanks,
>> Kevin
>> 
>> From: Eichberger, German [german.eichber...@hpe.com]
>> Sent: Thursday, March 03, 2016 12:47 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>> Kevin,
>>
>>  If we are offering a migration tool it would be namespace ->
>> namespace (or maybe Octavia since [1]) - given the limitations nobody
>> should be using the namespace driver outside a PoC so I am a bit
>> 

[openstack-dev] [ironic] weekly subteam status report

2016-03-07 Thread Ruby Loo
Hi,

We are itchy to present this week's subteam report for Ironic. As usual,
this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 29.02.2016):
- Ironic: 157 bugs (-5) + 175 wishlist items (-3). 16 new, 115 in progress
(-8), 0 critical, 22 high and 12 incomplete
- Inspector: 9 bugs (-1) + 15 wishlist items. 0 new, 6 in progress, 0
critical, 2 high and 0 incomplete
- Nova bugs with Ironic tag: 16. 0 new, 0 critical, 0 high
- monthly diff (with 01.02.2016):
- Ironic: 157 bugs (+6) + 175 wishlist items (+4), 115 in progress (+4)
- Inspector: 9 bugs (-5) + 15 wishlist items (-2), 6 in progress (-4)

Network isolation (Neutron/Ironic work) (jroll)
===
- API changes blocked pending driver-loading refactoring here:
https://review.openstack.org/#/c/285852/.
- Devananda is going to take over starting Monday
- Would like to get this in for a release later this week.
- LAG support in Nova not merged yet, unlikely to land in Mitaka:
https://review.openstack.org/#/c/206163
- client patches did not make M3 milestone (Mitaka client freeze):
https://review.openstack.org/#/c/206144

Manual cleaning (rloo)
==
- Everything is done except for GET clean steps API, which is deferred to
Newton.
- Removing this subteam item.

RAID (lucasagomes)
==
- RAID CLI is now merged https://review.openstack.org/#/c/226234/
- RAID documentation still being reviewed
https://review.openstack.org/#/c/226330/

Parallel tasks with futurist (dtantsur)
===
- This was done for ironic and inspector!
- Removing this subteam item.

Node filter API and claims endpoint (jroll, devananda, lucasagomes)
===
- no update; deprioritized in favor of neutron work, manual cleaning

Nova Liaisons (jlvillal & mrda)
===
- No status update

Testing/Quality (jlvillal/krtaylor)
===
- Grenade work continuing. jlvillal/mgould are attempting to root cause
current issue

Inspector (dtantsur)
===
- dnsmasq dhcp ip address hash collisions resolved by a config file update
(dhcp-sequential-ip)
- I'd like to make a release as soon as the keystoneauth patch merges

.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-07 Thread Eichberger, German
Ok, for what it’s worth we have contributed our migration script: 
https://review.openstack.org/#/c/289595/ — please look at this as a starting 
point and feel free to fix potential problems…

Thanks,
German




On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:

>As far as I recall, you can specify the VIP in creating the LB so you will end 
>up with same IPs.
>
>-Original Message-
>From: Eichberger, German [mailto:german.eichber...@hpe.com] 
>Sent: Monday, March 07, 2016 8:30 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Hi Sam,
>
>So if you have some 3rd party hardware you only need to change the database 
>(your steps 1-5) since the 3rd party hardware will just keep load balancing…
>
>Now for Kevin’s case with the namespace driver:
>You would need a 6th step to reschedule the loadbalancers with the V2 
>namespace driver — which can be done.
>
>If we want to migrate to Octavia or (from one LB provider to another) it might 
>be better to use the following steps:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format file into 
>some scripts which recreate the load balancers with your provider of choice — 
>
>6. Run those scripts
>
>The problem I see is that we will probably end up with different VIPs so the 
>end user would need to change their IPs… 
>
>Thanks,
>German
>
>
>
>On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
>
>>As for a migration tool.
>>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>>am in favor for the following process:
>>
>>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
>>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back 
>>over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to 
>>make room to some custom modification for mapping between v1 and v2 
>>models)
>>
>>What do you think?
>>
>>-Sam.
>>
>>
>>
>>
>>-Original Message-
>>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>>Sent: Friday, March 04, 2016 2:06 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Ok. Thanks for the info.
>>
>>Kevin
>>
>>From: Brandon Logan [brandon.lo...@rackspace.com]
>>Sent: Thursday, March 03, 2016 2:42 PM
>>To: openstack-dev@lists.openstack.org
>>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>>Just for clarity, V2 did not reuse tables, all the tables it uses are only 
>>for it.  The main problem is that v1 and v2 both have a pools resource, but 
>>v1 and v2's pool resource have different attributes.  With the way neutron 
>>wsgi works, if both v1 and v2 are enabled, it will combine both sets of 
>>attributes into the same validation schema.
>>
>>The other problem with v1 and v2 running together was only occurring when the 
>>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>>may actually have been fixed with some agent updates in neutron, since that 
>>is common code.  It needs to be tested out though.
>>
>>Thanks,
>>Brandon
>>
>>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>>> Just because you had thought no one was using it outside of a PoC doesn't 
>>> mean folks aren't using it in production.
>>>
>>> We would be happy to migrate to Octavia. We were planning on doing just 
>>> that by running both v1 with haproxy namespace, and v2 with Octavia and 
>>> then pick off upgrading lb's one at a time, but the reuse of the v1 tables 
>>> really was an unfortunate decision that blocked that activity.
>>>
>>> We're still trying to figure out a path forward.
>>>
>>> We have an outage window next month. after that, it could be about 6 
>>> months before we could try a migration due to production load picking 
>>> up for a while. I may just have to burn out all the lb's switch to 
>>> v2, then rebuild them by hand in a marathon outage :/
>>>
>>> And then there's this thingy that also critically needs fixing:
>>> https://bugs.launchpad.net/neutron/+bug/1457556
>>>
>>> Thanks,
>>> Kevin
>>> 
>>> From: Eichberger, German [german.eichber...@hpe.com]
>>> Sent: Thursday, March 03, 2016 12:47 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
>>> weready?
>>>
>>> Kevin,
>>>
>>>  If we are offering a migration tool it would be namespace -> 
>>> namespace (or maybe Octavia since [1]) - given the limitations nobody 
>>> should be using the namespace driver outside a PoC so I am a bit 
>>> 

[openstack-dev] [murano] Cancelling Team Meeting

2016-03-07 Thread Serg Melikyan
Meeting tomorrow, March 7, is canceled since a number of team members
are on holidays.

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Smaug]- IRC Meeting tomorrow (03/08) - 1400 UTC

2016-03-07 Thread Eran Gampel
Hi All,

We will hold our bi-weekly IRC meeting tomorrow (Tuesday, 03/08) at 1400
UTC in #openstack-meeting

Please review the proposed meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/smaug

Please feel free to add to the agenda any subject you would like to discuss.

Thanks,
Eran
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [FFE] Support TOSCA definitions for applications

2016-03-07 Thread Serg Melikyan
I have no objection regarding granting this FFE.

This feature is developed as a plugin for Murano, placed in contrib
folder and is not going to affect our main codebase in any way, so I
am pretty sure that it's safe to grant this FFE.

If there are no objections, I propose to consider the FFE granted.

On Thu, Mar 3, 2016 at 9:46 AM, Tetiana Lashchova
 wrote:
> Hi all,
>
> I would like to request a feature freeze exception for "Support TOSCA
> definitions for applications" [1].
> The spec is already merged [2], patch is on review [3] and the task is
> almost finished.
> I am looking forward for your decision about considering this change for a
> FFE.
>
> [1] https://blueprints.launchpad.net/murano/+spec/support-tosca-format
> [2] https://review.openstack.org/#/c/194422/
> [3] https://review.openstack.org/#/c/243872/
>
> Thanks,
> Tetiana
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Clarifications about ThirdParty CI deadlines

2016-03-07 Thread Thiago Paiva
Hello folks, 

My project is in need of some clarification about the deadlines for the CI 
deployment described in [1]. If you could kindly answer these questions to help us 
consider changes to our sprint planning to address the community requirements, it 
would be very helpful: 

1) In point 2, what do we mean by "receive events"? Is it about reading 
from the event stream of Gerrit and taking the appropriate actions? Is acting upon 
"check experimental" or "check " comments considered valid to 
fulfill this requirement? (A sketch of what we assume this means follows question 2.) 

2) We are a little confused by the phrase "post comments in the sandbox". Does 
that mean commenting on the "openstack-infra/ci-sandbox" project? Do we need 
to keep commenting on the sandbox even when we have already set up the job to read 
events and comment results for the "openstack/ironic" project? 
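
To make question 1 concrete, this is roughly what we assume "receive events"
means - a small, untested sketch (the host/port and the exact event fields are
assumptions taken from the upstream Gerrit stream-events documentation):

    # Sketch: read the Gerrit event stream over SSH and react to review comments.
    import json
    import subprocess

    cmd = ['ssh', '-p', '29418', 'our-ci-account@review.openstack.org',
           'gerrit', 'stream-events']
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

    for raw in iter(proc.stdout.readline, b''):
        event = json.loads(raw.decode('utf-8'))
        if event.get('type') != 'comment-added':
            continue
        if event.get('change', {}).get('project') != 'openstack/ironic':
            continue
        if 'check experimental' in event.get('comment', ''):
            # Here we would trigger our CI job against event['patchSet']['ref'].
            print('would run CI for %s' % event['patchSet']['ref'])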


Thank you, 


[1] https://wiki.openstack.org/wiki/Ironic/Testing#Third_Party_CI_Requirements 

Thiago Paiva Brito 
Lead Software Engineer 
OneView Drivers for Openstack Ironic 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-07 Thread Maksim Malchuk
Guys,

Samer has shown error messages... two lines:

*VM "fuel-master" has been successfully started.*
*VBoxManage: error: Guest not running*

which means exactly that the first command, "VBoxManage startvm", executed
successfully but the next "VBoxManage " command failed with an error.
That points to a problem with the VM itself. You need to open the VirtualBox
Manager (GUI), start the VM manually and look at the error.
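
If the GUI is not convenient, the same check can be scripted; a minimal sketch,
assuming VBoxManage is on PATH (the state names are the standard VirtualBox
ones):

    # Sketch: print the current state of the fuel-master VM without opening the GUI.
    import subprocess

    out = subprocess.check_output(
        ['VBoxManage', 'showvminfo', 'fuel-master', '--machinereadable'])
    for line in out.decode('utf-8').splitlines():
        if line.startswith('VMState='):
            # e.g. VMState="running", "poweroff" or "gurumeditation"
            print(line)
            break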


On Mon, Mar 7, 2016 at 6:48 PM, Aleksey Zvyagintsev <
azvyagint...@mirantis.com> wrote:

> Hello,
> that is definitely about the HDD. Create a disk with at least 50Gb+
>
> On Mon, Mar 7, 2016 at 5:32 PM, Samer Machara <
> samer.mach...@telecom-sudparis.eu> wrote:
>
>> Hello,
>>YES, I just found the solution: the Virtualization Option in the BIOS
>> was disabled by default. I turned it on and it's working now.
>> Now my problem is the resources, but that's another story. JEJEJE
>>
>> I'm not sure if I need RAM or HD.
>>
>>
>>
>> Thanks for your help.
>>
>> --
>> *De: *"Igor Marnat" 
>> *À: *"Maksim Malchuk" 
>> *Cc: *"Samer Machara" , "OpenStack
>> Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> *Envoyé: *Lundi 7 Mars 2016 11:21:42
>>
>> *Objet: *Re: [openstack-dev] [Fuel] [Openstack] Instalation
>> Problem:VBoxManage: error: Guest not running [ubuntu14.04]
>>
>> Samer,
>> did you make any progress?
>>
>> If not yet, I have couple of questions:
>> - Did you download MOS image and VBox scripts from
>> https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html#downloading-the-mirantis-openstack-image
>> ?
>>
>> - Can you login to your just deployed master node?
>>
>> If you can, could you please send the last 50-70 lines of the file
>> /var/log/puppet/bootstrap_admin_node.log from your master node?
>>
>> Regards,
>> Igor Marnat
>>
>> On Fri, Mar 4, 2016 at 11:20 PM, Maksim Malchuk 
>> wrote:
>>
>>> Samer, please address my recommendations.
>>>
>>>
>>> On Fri, Mar 4, 2016 at 7:49 PM, Samer Machara <
>>> samer.mach...@telecom-sudparis.eu> wrote:
>>>
 Hi, Igor
   Thanks for answer so quickly.

 I wait until the following message appears
 Installation timed out! (3000 seconds)
 I don't have any virtual machines created.

 I updated to VirtualBox version 5.0. Now I got the following message

 VBoxManage: error: Machine 'fuel-master' is not currently running
  Waiting for product VM to download files. Please do NOT abort the
 script...

 I'm still waiting

 --
 *De: *"Maksim Malchuk" 
 *À: *"OpenStack Development Mailing List (not for usage questions)" <
 openstack-dev@lists.openstack.org>
 *Envoyé: *Vendredi 4 Mars 2016 15:19:54
 *Objet: *Re: [openstack-dev] [Fuel] [Openstack] Instalation
 Problem:VBoxManage: error: Guest not running [ubuntu14.04]


 Igor,

 Some information about my system:
 OS: ubuntu 14.04 LTS
 Memory: 3,8GiB

 Samer can't run many guests I think.


 On Fri, Mar 4, 2016 at 5:12 PM, Igor Marnat 
 wrote:

> Samer, Maksim,
> I'd rather say that script started fuel-master already (VM
> "fuel-master" has been successfully started.), didn't find running guests,
> (VBoxManage: error: Guest not running) but it can try to start them
> afterwards.
>
> Samer,
> - how many VMs are there running besides fuel-master?
> - is it still showing "Waiting for product VM to download files.
> Please do NOT abort the script..." ?
> - for how long did you wait since the message above?
>
>
> Regards,
> Igor Marnat
>
> On Fri, Mar 4, 2016 at 5:04 PM, Maksim Malchuk 
> wrote:
>
>> Hi Samer,
>>
>> *VBoxManage: error: Guest not running*
>>
>> looks like a problem with VirtualBox itself or the settings for the
>> 'fuel-master' VM; it can't boot it.
>> Open the VirtualBox Manager (GUI), select the 'fuel-master' VM and
>> start it manually - it should show you what exactly happens.
>>
>>
>> On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara <
>> samer.mach...@telecom-sudparis.eu> wrote:
>>
>>> Hello, everyone.
>>> I'm new with Fuel. I'm trying to follow the QuickStart Guide (
>>> https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html),
>>> but I have the following Error:
>>>
>>>
>>> *Waiting for VM "fuel-master" to power on...*
>>> *VM "fuel-master" has been successfully started.*
>>> *VBoxManage: error: Guest not running*
>>> *VBoxManage: error: Guest not running*
>>> ...
>>> *VBoxManage: error: Guest not running*
>>> *Waiting for product VM to download files. Please do NOT abort the
>>> script...*
>>>
>>>

Re: [openstack-dev] [nova] Do we need a microversion to return a 501 from GET os-virtual-interfaces?

2016-03-07 Thread Andrey Kurilin
Hi!
I did not find any usage of "virtual_interfaces" with a try...except block, so
we can easily change the error code without a new microversion.

I'm +1 for proper code.
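
For what it's worth, callers that wrap the call in a generic except clause keep
working across the change either way; a rough sketch (the virtual_interfaces
manager, the client signature and the placeholder UUID are assumptions about
python-novaclient, not verified):

    from novaclient import client as nova_client
    from novaclient import exceptions as nova_exc

    # Assumed credentials; adjust to the target cloud.
    nova = nova_client.Client(2, 'admin', 'secret', 'admin',
                              'http://controller:5000/v2.0')

    try:
        vifs = nova.virtual_interfaces.list('SERVER_UUID')
        print([vif.id for vif in vifs])
    except nova_exc.ClientException as exc:
        # Today this is a 501 when neutron is used; after the change it
        # would be a 400.  Either way the generic exception path is taken.
        print('os-virtual-interfaces failed: %s' % exc)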

On Mon, Mar 7, 2016 at 6:32 PM, Ken'ichi Ohmichi 
wrote:

> 2016-03-06 15:41 GMT-08:00 Jay Pipes :
> > On 03/06/2016 02:21 PM, Matt Riedemann wrote:
> >>
> >> Thanks, good point. I think 400 would be best here. And according to our
> >> handy-dandy docs [1] it should be OK to do this without a microversion.
> >
> >
> > Yup, 400 is correct and yes, you should not need a new microversion
> addition
> > to the API.
>
> +1 for unnecessary microversion bumping.
>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-07 Thread Samuel Bercovici
As far as I recall, you can specify the VIP when creating the LB, so you will end 
up with the same IPs.
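
For reference, pinning the VIP while recreating the load balancer on v2 could
look roughly like this (a sketch only; the create_loadbalancer call and field
names are assumptions about python-neutronclient's LBaaS v2 support, and the
IDs are placeholders):

    from neutronclient.v2_0 import client as neutron_client

    # Assumed credentials; adjust to the target cloud.
    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    # Recreate the LBaaS v2 load balancer with the VIP address recorded from v1,
    # so the end user keeps the same IP.
    neutron.create_loadbalancer({'loadbalancer': {
        'name': 'migrated-lb',
        'vip_subnet_id': 'SUBNET_ID',
        'vip_address': '192.0.2.10',
    }})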

-Original Message-
From: Eichberger, German [mailto:german.eichber...@hpe.com] 
Sent: Monday, March 07, 2016 8:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

Hi Sam,

So if you have some 3rd party hardware you only need to change the database 
(your steps 1-5) since the 3rd party hardware will just keep load balancing…

Now for Kevin’s case with the namespace driver:
You would need a 6th step to reschedule the loadbalancers with the V2 namespace 
driver — which can be done.

If we want to migrate to Octavia or (from one LB provider to another) it might 
be better to use the following steps:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format file into 
some scripts which recreate the load balancers with your provider of choice — 

6. Run those scripts

The problem I see is that we will probably end up with different VIPs so the 
end user would need to change their IPs… 

Thanks,
German



On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:

>As for a migration tool.
>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>am in favor for the following process:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3. 
>Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back 
>over LBaaS v2 (need to allow moving from flavor1-->flavor2, need to 
>make room to some custom modification for mapping between v1 and v2 
>models)
>
>What do you think?
>
>-Sam.
>
>
>
>
>-Original Message-
>From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>Sent: Friday, March 04, 2016 2:06 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Ok. Thanks for the info.
>
>Kevin
>
>From: Brandon Logan [brandon.lo...@rackspace.com]
>Sent: Thursday, March 03, 2016 2:42 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Just for clarity, V2 did not reuse tables, all the tables it uses are only for 
>it.  The main problem is that v1 and v2 both have a pools resource, but v1 and 
>v2's pool resource have different attributes.  With the way neutron wsgi 
>works, if both v1 and v2 are enabled, it will combine both sets of attributes 
>into the same validation schema.
>
>The other problem with v1 and v2 running together was only occurring when the 
>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>may actually have been fixed with some agent updates in neutron, since that is 
>common code.  It needs to be tested out though.
>
>Thanks,
>Brandon
>
>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>> Just because you had thought no one was using it outside of a PoC doesn't 
>> mean folks aren't using it in production.
>>
>> We would be happy to migrate to Octavia. We were planning on doing just that 
>> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
>> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
>> an unfortunate decision that blocked that activity.
>>
>> We're still trying to figure out a path forward.
>>
>> We have an outage window next month. after that, it could be about 6 
>> months before we could try a migration due to production load picking 
>> up for a while. I may just have to burn out all the lb's switch to 
>> v2, then rebuild them by hand in a marathon outage :/
>>
>> And then there's this thingy that also critically needs fixing:
>> https://bugs.launchpad.net/neutron/+bug/1457556
>>
>> Thanks,
>> Kevin
>> 
>> From: Eichberger, German [german.eichber...@hpe.com]
>> Sent: Thursday, March 03, 2016 12:47 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>> Kevin,
>>
>>  If we are offering a migration tool it would be namespace -> 
>> namespace (or maybe Octavia since [1]) - given the limitations nobody 
>> should be using the namespace driver outside a PoC so I am a bit 
>> confused why customers can't self migrate. With 3rd party Lbs I would 
>> assume vendors providing those scripts to make sure their particular 
>> hardware works with those. If you indeed need a migration from LBaaS
>> V1 namespace -> LBaaS V2 namespace/Octavia please file an RfE with 
>> your use case so we can discuss it further...
>>
>> Thanks,
>> German
>>
>> [1] https://review.openstack.org/#/c/286380

Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-07 Thread Sergey Kolekonov
Thanks everyone! I will do my best to further improve puppet-neutron and
other modules

On Mon, Mar 7, 2016 at 6:23 PM, Emilien Macchi  wrote:

> I've proceeded to the change.
> We created puppet-neutron core and added Sergey Kolekonov part of this
> group.
>
> Congrats Sergey!
>
> On 03/04/2016 10:40 AM, Emilien Macchi wrote:
> > Hi,
> >
> > To scale up our review process, we created puppet-keystone-core and it
> > worked pretty well until now.
> >
> > I propose that we continue this model and create puppet-neutron-core.
> >
> > I also propose to add Sergey Kolekonov in this group.
> > He's done a great job helping us to bring puppet-neutron rock-solid for
> > deploying OpenStack networking.
> >
> > http://stackalytics.com/?module=puppet-neutron=marks
> > http://stackalytics.com/?module=puppet-neutron=commits
> > 14 commits and 47 reviews, present on IRC during meetings & bug triage,
> > he's always helpful. He has a very good understanding of Neutron &
> > Puppet so I'm quite sure he would be a great addition.
> >
> > As usual, please vote!
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Sergey Kolekonov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara]FFE Request for resume EDP job

2016-03-07 Thread Sergey Lukjanov
FFE granted. This feature seems to be low risk and part of an already merged
feature.

Thanks.

On Mon, Mar 7, 2016 at 9:34 AM, Chad Roberts  wrote:

> +1 to Trevor's $0.02.  Seems like low risk to break any existing
> functionality and it's a good feature that really makes sense.
>
> On Mon, Mar 7, 2016 at 9:42 AM, Trevor McKay  wrote:
>
>> My 2 cents, I agree that it is low risk -- the impl for resume is
>> analogous/parallel to the impl for suspend. And, it makes little
>> sense to me to include suspend without resume.
>>
>> In my mind, these two operations are halves of the same feature,
>> and since it is already partially implemented and approved, I think the
>> FFE should be granted.
>>
>> Best,
>>
>> Trev
>>
>> On Mon, 2016-03-07 at 09:07 -0500, Trevor McKay wrote:
>> > For some reason the link below is wrong for me, it goes to a different
>> > review. Here is a good one (I hope!):
>> >
>> > https://review.openstack.org/#/c/285839/
>> >
>> > Trev
>> >
>> > On Mon, 2016-03-07 at 14:28 +0800, lu jander wrote:
>> > > Hi folks,
>> > >
>> > > I would like to request a FFE for the feature “Resume EDP job”:
>> > >
>> > >
>> > >
>> > > BP:
>> > >
>> https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs
>> > >
>> > >
>> > > Spec has been merged. https://review.openstack.org/#/c/198264/
>> > >
>> > >
>> > > Suspend EDP patch has been merged.
>> > >  https://review.openstack.org/#/c/201448/
>> > >
>> > >
>> > > Code Review: https://review.openstack.org/#/c/285839/
>> > >
>> > >
>> > >
>> > > code is ready for review.
>> > >
>> > >
>> > >
>> > > The Benefits for this change: after suspend job, we can resume this
>> > > job.
>> > >
>> > >
>> > >
>> > > The Risk: The risk would be low for this patch, since the code of
>> > > suspend patch has been long time reviewed.
>> > >
>> > >
>> > >
>> > > Thanks,
>> > >
>> > > luhuichun
>> > >
>> > >
>> > >
>> > >
>> __
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-07 Thread Eichberger, German
Hi Sam,

So if you have some 3rd party hardware you only need to change the database 
(your steps 1-5) since the 3rd party hardware will just keep load balancing…

Now for Kevin’s case with the namespace driver:
You would need a 6th step to reschedule the loadbalancers with the V2 namespace 
driver — which can be done.

If we want to migrate to Octavia or (from one LB provider to another) it might 
be better to use the following steps:

1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
Monitors , Members) into some JSON format file(s)
2. Delete LBaaS v1 
3. Uninstall LBaaS v1
4. Install LBaaS v2
5. Transform the JSON format file into some scripts which recreate the load 
balancers with your provider of choice — 

6. Run those scripts

The problem I see is that we will probably end up with different VIPs so the 
end user would need to change their IPs… 
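
As a rough illustration of step 1, something along these lines could dump the
v1 objects to a JSON file with python-neutronclient (a sketch under the
assumption that the LBaaS v1 list calls are still enabled; credentials are
placeholders):

    import json

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    # Collect the LBaaS v1 objects we need in order to recreate them on v2.
    dump = {
        'vips': neutron.list_vips()['vips'],
        'pools': neutron.list_pools()['pools'],
        'members': neutron.list_members()['members'],
        'health_monitors':
            neutron.list_health_monitors()['health_monitors'],
    }

    with open('lbaas_v1_dump.json', 'w') as f:
        json.dump(dump, f, indent=2)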

Thanks,
German



On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:

>As for a migration tool.
>Due to model changes and deployment changes between LBaaS v1 and LBaaS v2, I 
>am in favor for the following process:
>
>1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health 
>Monitors , Members) into some JSON format file(s)
>2. Delete LBaaS v1 
>3. Uninstall LBaaS v1
>4. Install LBaaS v2
>5. Import the data from 1 back over LBaaS v2 (need to allow moving from 
>flavor1-->flavor2, need to make room to some custom modification for mapping 
>between v1 and v2 models)
>
>What do you think?
>
>-Sam.
>
>
>
>
>-Original Message-
>From: Fox, Kevin M [mailto:kevin@pnnl.gov] 
>Sent: Friday, March 04, 2016 2:06 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Ok. Thanks for the info.
>
>Kevin
>
>From: Brandon Logan [brandon.lo...@rackspace.com]
>Sent: Thursday, March 03, 2016 2:42 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>
>Just for clarity, V2 did not reuse tables, all the tables it uses are only for 
>it.  The main problem is that v1 and v2 both have a pools resource, but v1 and 
>v2's pool resource have different attributes.  With the way neutron wsgi 
>works, if both v1 and v2 are enabled, it will combine both sets of attributes 
>into the same validation schema.
>
>The other problem with v1 and v2 running together was only occurring when the 
>v1 agent driver and v2 agent driver were both in use at the same time.  This 
>may actually have been fixed with some agent updates in neutron, since that is 
>common code.  It needs to be tested out though.
>
>Thanks,
>Brandon
>
>On Thu, 2016-03-03 at 22:14 +, Fox, Kevin M wrote:
>> Just because you had thought no one was using it outside of a PoC doesn't 
>> mean folks aren't using it in production.
>>
>> We would be happy to migrate to Octavia. We were planning on doing just that 
>> by running both v1 with haproxy namespace, and v2 with Octavia and then pick 
>> off upgrading lb's one at a time, but the reuse of the v1 tables really was 
>> an unfortunate decision that blocked that activity.
>>
>> We're still trying to figure out a path forward.
>>
>> We have an outage window next month. after that, it could be about 6 
>> months before we could try a migration due to production load picking 
>> up for a while. I may just have to burn out all the lb's switch to v2, 
>> then rebuild them by hand in a marathon outage :/
>>
>> And then there's this thingy that also critically needs fixing:
>> https://bugs.launchpad.net/neutron/+bug/1457556
>>
>> Thanks,
>> Kevin
>> 
>> From: Eichberger, German [german.eichber...@hpe.com]
>> Sent: Thursday, March 03, 2016 12:47 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?
>>
>> Kevin,
>>
>>  If we are offering a migration tool it would be namespace -> 
>> namespace (or maybe Octavia since [1]) - given the limitations nobody 
>> should be using the namespace driver outside a PoC so I am a bit 
>> confused why customers can't self migrate. With 3rd party Lbs I would 
>> assume vendors providing those scripts to make sure their particular 
>> hardware works with those. If you indeed need a migration from LBaaS 
>> V1 namespace -> LBaaS V2 namespace/Octavia please file an RfE with 
>> your use case so we can discuss it further...
>>
>> Thanks,
>> German
>>
>> [1] https://review.openstack.org/#/c/286380
>>
>> From: "Fox, Kevin M" >
>> Reply-To: "OpenStack Development Mailing List (not for usage 
>> questions)" 
>> Date: Wednesday, March 2, 2016 at 5:17 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 

[openstack-dev] [nova][pci] What is the point of the ALLOCATED vs. CLAIMED device status?

2016-03-07 Thread Jay Pipes

Subject says it all.

I've been trying to fix this bug:

https://bugs.launchpad.net/nova/+bug/1549984

and just shake my head every time I look at the PCI handling code in 
nova/pci/manager.py and nova/pci/stats.py.


Why do we have a CLAIMED state as well as an ALLOCATED state?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Ben Nemec
 anyways.
> The downside is that we'd probably hit fewer timing errors (assuming
> the tight resources are what's showing them up), I say downside because
> this just means downstream users might hit them more often if CI
> doesn't. Anyway, maybe worth discussing at tomorrow's meeting.

+1 to reducing the number of testenvs and allocating more memory to
each.  The huge number of rechecks we're having to do is definitely
contributing to our CI load in a big way, so if we could cut those down
by 50% I bet it would offset the lost testenvs.  And it would reduce
developer aggravation by about a million percent. :-)

Also, on some level I'm not too concerned about the absolute minimum
memory use case.  Nobody deploying OpenStack in the real world is doing
so on 4 GB nodes.  I doubt 99% of them are doing so on less than 32 GB
nodes.  Until we have composable services, I don't know that we can
support the 4 GB use case anymore.  We've just added too many services
to the overcloud.

That said though, keeping service memory usage under control is still
valuable and we should figure out why Swift is using so much memory when
it's not under much load at all.  That's actually the undercloud, so
it's sort of tangential to this discussion.

> 
> 
>>
>> [1] - 
>> http://logs.openstack.org/85/289085/2/check-tripleo/gate-tripleo-ci-f22-nonha/6fda33c/
>> [2] - http://goodsquishy.com/downloads/20160307/swap.png
>> [3] - 22:09:03 21678 INFO [-] Master cache miss for image
>> b6a96213-7955-4c4d-829e-871350939e03, starting download
>>   22:09:41 21678 DEBUG [-] Running cmd (subprocess): qemu-img info
>> /var/lib/ironic/master_images/tmpvjAlCU/b6a96213-7955-4c4d-829e-871350939e03.part
>> [4] - 17690 swift 20   0  804824 547724   1780 S   0.0 10.8
>> 0:04.82 swift-prox+
>>
>>
>>>
>>> The recent change to add swap to the overcloud nodes has proved to be
>>> unstable. But that has more to do with it being racey with the
>>> validation deployment afaict. There are some patches currently up to
>>> address those issues.
>>>
>>>>
>>>>
>>>> 2/ Split CI jobs in scenarios.
>>>>
>>>> Currently we have CI jobs for ceph, HA, non-ha, containers and the
>>>> current situation is that jobs fail randomly, due to performances issues.
>>
>> We don't know it due to performance issues, Your probably correct that
>> we wouldn't see them if we were allocating more resources to the ci
>> tests but this just means we have timing issues that are more
>> prevalent when resource constrained, I think that answer here is for
>> somebody to spend the time root cause each false negative we get and
>> fix where appropriate and then keep doing it, timing issues will
>> continue to sneak in if we're not keeping on top of it we get into the
>> recheck hell we're currently in.
>>
>>>>
>>>> Puppet OpenStack CI had the same issue where we had one integration job
>>>> and we never stopped adding more services until all becomes *very*
>>>> unstable. We solved that issue by splitting the jobs and creating 
>>>> scenarios:
>>>>
>>>> https://github.com/openstack/puppet-openstack-integration#description
>>>>
>>>> What I propose is to split TripleO jobs in more jobs, but with less
>>>> services.
>>>>
>>>> The benefit of that:
>>>>
>>>> * more services coverage
>>>> * jobs will run faster
>>>> * less random issues due to bad performances
>>>>
>>>> The cost is of course it will consume more resources.
>>>> That's why I suggest 3/.
>>>>
>>>> We could have:
>>>>
>>>> * HA job with ceph and a full compute scenario (glance, nova, cinder,
>>>> ceilometer, aodh & gnocchi).
>>>> * Same with IPv6 & SSL.
>>>> * HA job without ceph and full compute scenario too
>>>> * HA job without ceph and basic compute (glance and nova), with extra
>>>> services like Trove, Sahara, etc.
>>>> * ...
>>>> (note: all jobs would have network isolation, which is to me a
>>>> requirement when testing an installer like TripleO).
>>>
>>> Each of those jobs would at least require as much memory as our
>>> current HA job. I don't see how this gets us to using less memory. The
>>> HA job we have now already deploys the minimal amount of services that
>>> is possible given our current architecture. Without the composable
>>> service roles work, we can't deploy less services than we already are.
>>
>> Ya, th

[openstack-dev] [nova][ceilometer] unable to see compute.node.cpu.* metrics in ceilometer

2016-03-07 Thread Kapil
Hi

I enabled ComputeDriverCPUMonitor in nova.conf on one of the compute nodes,
restarted nova-compute, ceilometer-agent-compute on the compute node and
ceilometer-collector, ceilometer-api, ceilometer-agent-central,
ceilometer-agent-notification on the controller node.
However, I cannot see the cpu meters or samples in ceilometer.

Any suggestions as to what may be the issue?
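
For completeness, this is roughly how I am checking for them (a sketch with
python-ceilometerclient; the auth values and the compute-node resource_id
format are assumptions):

    from ceilometerclient import client

    ceilo = client.get_client(
        2,
        os_username='admin', os_password='secret', os_tenant_name='admin',
        os_auth_url='http://controller:5000/v2.0')

    # compute.node.cpu.* meters are reported against a per-host resource id,
    # typically of the form "<host>_<nodename>".
    query = [dict(field='resource_id', op='eq', value='compute1_compute1')]
    for meter in ceilo.meters.list(q=query):
        print(meter.name)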

Thanks
Kapil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Ben Nemec
On 03/07/2016 12:00 PM, Derek Higgins wrote:
> On 7 March 2016 at 12:11, John Trowbridge  wrote:
>>
>>
>> On 03/06/2016 11:58 AM, James Slagle wrote:
>>> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi  wrote:
 I'm kind of hijacking Dan's e-mail but I would like to propose some
 technical improvements to stop having so much CI failures.


 1/ Stop creating swap files. We don't have SSD, this is IMHO a terrible
 mistake to swap on files because we don't have enough RAM. In my
 experience, swapping on non-SSD disks is even worse than not having
 enough RAM. We should stop doing that I think.
>>>
>>> We have been relying on swap in tripleo-ci for a little while. While
>>> not ideal, it has been an effective way to at least be able to test
>>> what we've been testing given the amount of physical RAM that is
>>> available.
>>>
>>> The recent change to add swap to the overcloud nodes has proved to be
>>> unstable. But that has more to do with it being racey with the
>>> validation deployment afaict. There are some patches currently up to
>>> address those issues.
>>>


 2/ Split CI jobs in scenarios.

 Currently we have CI jobs for ceph, HA, non-ha, containers and the
 current situation is that jobs fail randomly, due to performances issues.

 Puppet OpenStack CI had the same issue where we had one integration job
 and we never stopped adding more services until all becomes *very*
 unstable. We solved that issue by splitting the jobs and creating 
 scenarios:

 https://github.com/openstack/puppet-openstack-integration#description

 What I propose is to split TripleO jobs in more jobs, but with less
 services.

 The benefit of that:

 * more services coverage
 * jobs will run faster
 * less random issues due to bad performances

 The cost is of course it will consume more resources.
 That's why I suggest 3/.

 We could have:

 * HA job with ceph and a full compute scenario (glance, nova, cinder,
 ceilometer, aodh & gnocchi).
 * Same with IPv6 & SSL.
 * HA job without ceph and full compute scenario too
 * HA job without ceph and basic compute (glance and nova), with extra
 services like Trove, Sahara, etc.
 * ...
 (note: all jobs would have network isolation, which is to me a
 requirement when testing an installer like TripleO).
>>>
>>> Each of those jobs would at least require as much memory as our
>>> current HA job. I don't see how this gets us to using less memory. The
>>> HA job we have now already deploys the minimal amount of services that
>>> is possible given our current architecture. Without the composable
>>> service roles work, we can't deploy less services than we already are.
>>>
>>>
>>>

 3/ Drop non-ha job.
 I'm not sure why we have it, and the benefit of testing that comparing
 to HA.
>>>
>>> In my opinion, I actually think that we could drop the ceph and non-ha
>>> job from the check-tripleo queue.
>>>
>>> non-ha doesn't test anything realistic, and it doesn't really provide
>>> any faster feedback on patches. It seems at most it might run 15-20
>>> minutes faster than the HA job on average. Sometimes it even runs
>>> slower than the HA job.
>>>
>>> The ceph job we could move to the experimental queue to run on demand
>>> on patches that might affect ceph, and it could also be a daily
>>> periodic job.
>>>
>>> The same could be done for the containers job, an IPv6 job, and an
>>> upgrades job. Ideally with a way to run an individual job as needed.
>>> Would we need different experimental queues to do that?
>>>
>>> That would leave only the HA job in the check queue, which we should
>>> run with SSL and network isolation. We could deploy less testenv's
>>> since we'd have less jobs running, but give the ones we do deploy more
>>> RAM. I think this would really alleviate a lot of the transient
>>> intermittent failures we get in CI currently. It would also likely run
>>> faster.
>>>
>>> It's probably worth seeking out some exact evidence from the RDO
>>> centos-ci, because I think they are testing with virtual environments
>>> that have a lot more RAM than tripleo-ci does. It'd be good to
>>> understand if they have some of the transient failures that tripleo-ci
>>> does as well.
>>>
>>
>> The HA job in RDO CI is also more unstable than nonHA, although this is
>> usually not to do with memory contention. Most of the time that I see
>> the HA job fail spuriously in RDO CI, it is because of the Nova
>> scheduler race. I would bet that this race is the cause for the
>> fluctuating amount of time jobs take as well, because the recovery
>> mechanism for this is just to retry. Those retries can add 15 min. per
>> retry to the deploy. In RDO CI there is a 60min. timeout for deploy as
>> well. If we can't deploy to virtual machines in under an hour, to me
>> that is a bug. 

Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Derek Higgins
On 7 March 2016 at 12:11, John Trowbridge  wrote:
>
>
> On 03/06/2016 11:58 AM, James Slagle wrote:
>> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi  wrote:
>>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>>> technical improvements to stop having so much CI failures.
>>>
>>>
>>> 1/ Stop creating swap files. We don't have SSD, this is IMHO a terrible
>>> mistake to swap on files because we don't have enough RAM. In my
>>> experience, swapping on non-SSD disks is even worse than not having
>>> enough RAM. We should stop doing that I think.
>>
>> We have been relying on swap in tripleo-ci for a little while. While
>> not ideal, it has been an effective way to at least be able to test
>> what we've been testing given the amount of physical RAM that is
>> available.
>>
>> The recent change to add swap to the overcloud nodes has proved to be
>> unstable. But that has more to do with it being racey with the
>> validation deployment afaict. There are some patches currently up to
>> address those issues.
>>
>>>
>>>
>>> 2/ Split CI jobs in scenarios.
>>>
>>> Currently we have CI jobs for ceph, HA, non-ha, containers and the
>>> current situation is that jobs fail randomly, due to performances issues.
>>>
>>> Puppet OpenStack CI had the same issue where we had one integration job
>>> and we never stopped adding more services until all becomes *very*
>>> unstable. We solved that issue by splitting the jobs and creating scenarios:
>>>
>>> https://github.com/openstack/puppet-openstack-integration#description
>>>
>>> What I propose is to split TripleO jobs in more jobs, but with less
>>> services.
>>>
>>> The benefit of that:
>>>
>>> * more services coverage
>>> * jobs will run faster
>>> * less random issues due to bad performances
>>>
>>> The cost is of course it will consume more resources.
>>> That's why I suggest 3/.
>>>
>>> We could have:
>>>
>>> * HA job with ceph and a full compute scenario (glance, nova, cinder,
>>> ceilometer, aodh & gnocchi).
>>> * Same with IPv6 & SSL.
>>> * HA job without ceph and full compute scenario too
>>> * HA job without ceph and basic compute (glance and nova), with extra
>>> services like Trove, Sahara, etc.
>>> * ...
>>> (note: all jobs would have network isolation, which is to me a
>>> requirement when testing an installer like TripleO).
>>
>> Each of those jobs would at least require as much memory as our
>> current HA job. I don't see how this gets us to using less memory. The
>> HA job we have now already deploys the minimal amount of services that
>> is possible given our current architecture. Without the composable
>> service roles work, we can't deploy less services than we already are.
>>
>>
>>
>>>
>>> 3/ Drop non-ha job.
>>> I'm not sure why we have it, and the benefit of testing that comparing
>>> to HA.
>>
>> In my opinion, I actually think that we could drop the ceph and non-ha
>> job from the check-tripleo queue.
>>
>> non-ha doesn't test anything realistic, and it doesn't really provide
>> any faster feedback on patches. It seems at most it might run 15-20
>> minutes faster than the HA job on average. Sometimes it even runs
>> slower than the HA job.
>>
>> The ceph job we could move to the experimental queue to run on demand
>> on patches that might affect ceph, and it could also be a daily
>> periodic job.
>>
>> The same could be done for the containers job, an IPv6 job, and an
>> upgrades job. Ideally with a way to run an individual job as needed.
>> Would we need different experimental queues to do that?
>>
>> That would leave only the HA job in the check queue, which we should
>> run with SSL and network isolation. We could deploy less testenv's
>> since we'd have less jobs running, but give the ones we do deploy more
>> RAM. I think this would really alleviate a lot of the transient
>> intermittent failures we get in CI currently. It would also likely run
>> faster.
>>
>> It's probably worth seeking out some exact evidence from the RDO
>> centos-ci, because I think they are testing with virtual environments
>> that have a lot more RAM than tripleo-ci does. It'd be good to
>> understand if they have some of the transient failures that tripleo-ci
>> does as well.
>>
>
> The HA job in RDO CI is also more unstable than nonHA, although this is
> usually not to do with memory contention. Most of the time that I see
> the HA job fail spuriously in RDO CI, it is because of the Nova
> scheduler race. I would bet that this race is the cause for the
> fluctuating amount of time jobs take as well, because the recovery
> mechanism for this is just to retry. Those retries can add 15 min. per
> retry to the deploy. In RDO CI there is a 60min. timeout for deploy as
> well. If we can't deploy to virtual machines in under an hour, to me
> that is a bug. (Note, I am speaking of `openstack overcloud deploy` when
> I say deploy, though start to finish can take less than an hour with
> decent CPUs)
>
> RDO CI uses the 

Re: [openstack-dev] [fuel] Newton PTL and CL elections

2016-03-07 Thread Jeremy Stanley
On 2016-03-07 09:49:02 -0800 (-0800), Dmitry Borodaenko wrote:
> Then that's what we will do, thanks.

Also, our official electorate roll generation aggregates potential
voter pools on a per-team basis according to definitions in
openstack/governance, so may not be fine-grained enough if you
intend to have separate rolls for your different subsystem
deliverables anyway.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Newton PTL and CL elections

2016-03-07 Thread Dmitry Borodaenko
On Mon, Mar 07, 2016 at 09:37:57AM +0100, Thierry Carrez wrote:
> Dmitry Borodaenko wrote:
> >Updated dates based on openstack/election events.yaml:
> >
> >PTL self-nomination: March 11-17
> >PTL election: March 18-24
> >CL self-nomination: March 25-31
> >CL election: April 1-7
> >
> >Can we fit the component leads election into the same process (i.e.
> >component lead candidates would self-nominate by submitting
> >candidates///.txt files to
> >openstack/election)?
> 
> I think the election officials already have their hands full with the
> elections required by our governance, they probably can't handle specific
> elections for custom subroles in project teams (like the component leads
> that Fuel seems to want).
> 
> I'd recommend that you run that part yourselves once the PTL is elected.

Then that's what we will do, thanks.

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Steven Dake (stdake)


On 3/7/16, 10:16 AM, "Paul Bourke"  wrote:

>This is a messy topic. I feel there's been some miscommunication and
>confusion on the issue which hopefully I can sum up.
>
>As far as I remember it, Sam is correct, we did always plan to do
>Liberty upgrades. However, over the course of time post Tokyo these
>plans didn't really materialise, at which point Michael kindly stepped
>forward to kick things into action [0].
>
>Between now and then the focus shifted to "how do we get from Liberty to
>Mitaka", and the original discussion of moving between minor Liberty
>releases fell by the wayside. I think this is where the confusion has
>arisen.

This is accurate.  It wasn't a matter of not materializing, however; we
were capacity constrained.  We will have upgrades working for Mitaka but
it required a lot of everyone's time.

Regards
-stev

>
>As I mentioned before I have been opposed to backporting features to
>stable/liberty, as this is against the philosophy of a stable branch
>etc. etc. However, as has been mentioned many times before, this is a
>new project, hindsight is a great thing. Ideally users running Liberty
>who encounter a CVE could be told to go to Mitaka, but in reality this
>is an unreasonable expectation and operators who have gone on our
>previous release notes that Liberty is ready to rock will feel screwed
>over.
>
>So based on the above I am +1 to get upgrades into Liberty, I hope this
>makes sense.
>
>Regards,
>-Paul
>
>[0] 
>http://lists.openstack.org/pipermail/openstack-dev/2015-December/081467.ht
>ml
>
>On 07/03/16 16:00, Sam Yaple wrote:
>> On Mon, Mar 7, 2016 at 3:03 PM, Steven Dake (stdake) > > wrote:
>>
>> Hi folks,
>>
>> It was never really discussed if we would back-port upgrades to
>> liberty.  This came up during  an irc conversation Friday [1], and a
>> vote was requested.  Tthe facts of the discussion distilled are:
>>
>>   * We never agreed as a group to do back-port of upgrade during our
>> back-port discussion
>>   * An operator that can't upgrade her Z version of Kolla (1.1.1
>> from 1.1.0) is stuck without CVE or OSSA corrections.
>>   * Because of a lack of security upgrades, the individual
>> responsible for executing the back-port would abandon the work
>> (but not tuse the abaondon feature for of gerrit for changes
>> already in the queue)
>>
>> Since we never agreed, in that IRC discussion a vote was requested,
>> and I am administering the vote.  The vote request was specifically
>> should we have a back-port of upgrade in 1.1.0.  Both parties agreed
>> they would live with the results.
>>
>> I would like to point out that not porting upgrades means that the
>> liberty branch would essentially become abandoned unless some other
>> brave soul takes up a backport.  I would also like to point out that
>> that is yet another exception much like thin-containers back-port
>> which was accepted.  See how exceptions become the way to the dark
>> side.  We really need to stay exception free going forward (in
>> Mitaka and later) as much as possible to prevent expectations that
>> we will make exceptions when none should be made.
>>
>> This is not an exception. This was always a requirement. If you disagree
>> with that then we have never actually had a stable branch. The fact is
>> we _always_ needed z version upgrades for Kolla. It was _always_ the
>> plan to have them. Feel free to reference the IRC logs and our prior
>> mid-cycle and our Tokyo upgrade sessions. What changed was time and
>> peoples memories and that's why this is even a conversation.
>>
>> Please vote +1 (backport) or -1 (don't backport).  An abstain in
>> this case is the same as voting -1, so please vote either way.  I
>> will leave the voting open for 1 week until April 14th.  If there is
>> a majority in favor, I will close voting early.  We currently
>> require 6 votes for majority as our core team consists of 11 people.
>>
>> Regards,
>> -steve
>>
>>
>> [1]
>> 
>>http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-03-04.log.h
>>tml#t2016-03-04T18:23:26
>>
>> Warning [1] was a pretty heated argument and there may have been
>> some swearing :)
>>
>> voting.
>>
>> "Should we back-port upgrades
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> Obviously I am +1 on committing to a stable /stable/ branch. And that
>> has always required Z upgrades. Luckily the work we did in master is
>> also usable for z 

Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Derek Higgins
On 7 March 2016 at 15:24, Derek Higgins <der...@redhat.com> wrote:
> On 6 March 2016 at 16:58, James Slagle <james.sla...@gmail.com> wrote:
>> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi <emil...@redhat.com> wrote:
>>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>>> technical improvements to stop having so much CI failures.
>>>
>>>
>>> 1/ Stop creating swap files. We don't have SSD, this is IMHO a terrible
>>> mistake to swap on files because we don't have enough RAM. In my
>>> experience, swapping on non-SSD disks is even worse than not having
>>> enough RAM. We should stop doing that I think.
>>
>> We have been relying on swap in tripleo-ci for a little while. While
>> not ideal, it has been an effective way to at least be able to test
>> what we've been testing given the amount of physical RAM that is
>> available.
>
> Ok, so I have a few points here, in places where I'm making
> assumptions I'll try to point it out
>
> o Yes I agree using swap should be avoided if at all possible
>
> o We are currently looking into adding more RAM to our testenv hosts,
> at which point we can afford to be a little more liberal with memory
> and this problem should become less of an issue, having said that
>
> o Even though using swap is bad, if we have some processes with a
> large Mem footprint that don't require constant access to a portion of
> the footprint, swapping it out over the duration of the CI test isn't as
> expensive as it would suggest (assuming it doesn't need to be swapped
> back in and the kernel has selected good candidates to swap out)
>
> o The test envs that host the undercloud and overcloud nodes have 64G
> of RAM each, they each host 4 testenvs and each test env if running a
> HA job can use up to 21G of RAM so we have over committed there, it
> this is only a problem if a test env host gets 4 HA jobs that are
> started around the same time (and as a result a each have 4 overcloud
> nodes running at the same time), to allow this to happen without VM's
> being killed by the OOM we've also enabled swap there. The majority of
> the time this swap isn't in use, only if all 4 testenvs are being
> simultaneously used and they are all running the second half of a CI
> test at the same time.
>
> o The overcloud nodes are VM's running with a "unsafe" disk caching
> mechanism, this causes sync requests from guest to be ignored and as a
> result if the instances being hosted on these nodes are going into
> swap this swap will be cached on the host as long as RAM is available.
> i.e. swap being used in the undercloud or overcloud isn't being synced
> to the disk on the host unless it has to be.
>
> o What I'd like us to avoid is simply bumping up the memory every time
> we hit a OOM error without at least
>   1. Explaining why we need more memory all of a sudden
>   2. Looking into a way we may be able to avoid simply bumping the RAM
> (at peak times we are memory constrained)
>
> as an example, let's take a look at the swap usage on the undercloud of
> a recent ci nonha job[1][2]. These instances have 5G of RAM with 2G of
> swap enabled via a swapfile;
> the overcloud deploy started @22:07:46 and finished at @22:28:06
>
> In the graph you'll see a spike in memory being swapped out around
> 22:09, this corresponds almost exactly to when the overcloud image is
> being downloaded from swift[3]. Looking at the top output at the end of
> the test you'll see that swift-proxy is using over 500M of Mem[4].
>
> I'd much prefer we spend time looking into why the swift proxy is
> using this much memory rather than blindly bumping the memory allocated
> to the VM; perhaps we have something configured incorrectly or we've
> hit a bug in swift.
>
> Having said all that we can bump the memory allocated to each node but
> we have to accept 1 of 2 possible consequences
> 1. We'll end up using the swap on the testenv hosts more than we
> currently are, or
> 2. We'll have to reduce the number of test envs per host from 4 down
> to 3, wiping out 25% of our capacity

Thinking about this a little more, we could do a radical experiment
for a week and just do this, i.e. bump up the RAM on each env and
accept that we lose 25% of our capacity; maybe it doesn't matter, if our
success rate goes up then we'd be running fewer rechecks anyway.
The downside is that we'd probably hit fewer timing errors (assuming
the tight resources are what's showing them up), I say downside because
this just means downstream users might hit them more often if CI
doesn't. Anyway, maybe worth discussing at tomorrow's meeting.


>
> [1] - 
> http://logs.openstack.org/85/289085/2/check-tripleo/gate-tripleo-ci-f22-nonha/6fda33c/

Re: [openstack-dev] [sahara]FFE Request for resume EDP job

2016-03-07 Thread Chad Roberts
+1 to Trevor's $0.02.  Seems like low risk to break any existing
functionality and it's a good feature that really makes sense.

On Mon, Mar 7, 2016 at 9:42 AM, Trevor McKay  wrote:

> My 2 cents, I agree that it is low risk -- the impl for resume is
> analogous/parallel to the impl for suspend. And, it makes little
> sense to me to include suspend without resume.
>
> In my mind, these two operations are halves of the same feature,
> and since it is already partially implemented and approved, I think the
> FFE should be granted.
>
> Best,
>
> Trev
>
> On Mon, 2016-03-07 at 09:07 -0500, Trevor McKay wrote:
> > For some reason the link below is wrong for me, it goes to a different
> > review. Here is a good one (I hope!):
> >
> > https://review.openstack.org/#/c/285839/
> >
> > Trev
> >
> > On Mon, 2016-03-07 at 14:28 +0800, lu jander wrote:
> > > Hi folks,
> > >
> > > I would like to request a FFE for the feature “Resume EDP job”:
> > >
> > >
> > >
> > > BP:
> > >
> https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs
> > >
> > >
> > > Spec has been merged. https://review.openstack.org/#/c/198264/
> > >
> > >
> > > Suspend EDP patch has been merged.
> > >  https://review.openstack.org/#/c/201448/
> > >
> > >
> > > Code Review: https://review.openstack.org/#/c/285839/
> > >
> > >
> > >
> > > code is ready for review.
> > >
> > >
> > >
> > > The Benefits for this change: after suspend job, we can resume this
> > > job.
> > >
> > >
> > >
> > > The Risk: The risk would be low for this patch, since the code of
> > > suspend patch has been long time reviewed.
> > >
> > >
> > >
> > > Thanks,
> > >
> > > luhuichun
> > >
> > >
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread Jeremy Stanley
On 2016-03-07 23:54:49 +0800 (+0800), Duncan Thomas wrote:
> Complexity can be tricky to spot by hand, and expecting reviewers to get it
> right all of the time is not a reasonable expectation.
> 
> My ideal would be something that processes the commit and the jenkins logs,
> extracts the timing info of any new tests, and if they are outside some
> (fairly tight) window, then posts a comment to the review indicating that
> these tests should get closer scrutiny. This does not remove reviewer
> judgement from the equation, just provides a helpful prod that there's
> something to be considered.

Has any analysis been performed on the existing body of test timing
data so far (e.g., by querying the subunit2sql data used for
http://status.openstack.org/openstack-health/#/job/gate-cinder-python27?groupKey=project=hour
)?
I suspect you'll find that evaluating durations of tests on the
basis of individual runs is fraught with false positive results due
to the significant variability in our CI workers across different
service providers. If it really were predictably consistent, then
it seems like just adding a timeout fixture at whatever you
determine is a sane duration would be sufficient.
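For illustration, the timeout-fixture idea amounts to one line in a shared test
base class. A minimal sketch using the fixtures library (the class name and the
5-second budget here are made up, not Cinder's actual code):

    import fixtures
    import testtools


    class FastTestCase(testtools.TestCase):
        """Hypothetical base class failing any test that exceeds a time budget."""

        TEST_TIMEOUT = 5  # seconds; whatever duration is deemed sane

        def setUp(self):
            super(FastTestCase, self).setUp()
            # Relies on SIGALRM, so Unix-only. gentle=True raises inside the
            # offending test instead of killing the whole process, so the
            # failure is attributed to the slow test case.
            self.useFixture(fixtures.Timeout(self.TEST_TIMEOUT, gentle=True))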
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread John Griffith
On Mon, Mar 7, 2016 at 8:57 AM, Knight, Clinton 
wrote:

>
>
> On 3/7/16, 10:45 AM, "Eric Harney"  wrote:
>
> >On 03/06/2016 09:35 PM, John Griffith wrote:
> >> On Sat, Mar 5, 2016 at 4:27 PM, Jay S. Bryant
> >> >>> wrote:
> >>
> >>> Ivan,
> >>>
> >>> I agree that our testing needs improvement.  Thanks for starting this
> >>> thread.
> >>>
> >>> With regards to adding a hacking check for tests that run too long ...
> >>>are
> >>> you thinking that we would have a timer that checks for long-running
> >>>jobs or
> >>> something that checks for long sleeps in the testing code?  Just
> >>>curious
> >>> your ideas for tackling that situation.  Would be interested in helping
> >>> with that, perhaps.
> >>>
> >>> Thanks!
> >>> Jay
> >>>
> >>>
> >>> On 03/02/2016 05:25 AM, Ivan Kolodyazhny wrote:
> >>>
> >>> Hi Team,
> >>>
> >>> Here are my thoughts and proposals how to make Cinder testing process
> >>> better. I won't cover "3rd party CI's" topic here. I will share my
> >>>opinion
> >>> about current and feature jobs.
> >>>
> >>>
> >>> Unit-tests
> >>>
> >>>- Long-running tests. I hope, everybody will agree that unit-tests
> >>>must be quite simple and very fast. Unit tests which take more
> >>>than 3-5
> >>>seconds should be refactored and/or moved to 'integration' tests.
> >>>Thanks to Tom Barron for several fixes like [1]. IMO, it would be
> >>>good to have some hacking checks to prevent such issues in the future.
> >>>
> >>>- Tests coverage. We don't check it in an automatic way on gates.
> >>>Usually, we require to add some unit-tests during code review
> >>>process. Why
> >>>can't we add a coverage job to our CI and not merge new patches
> >>>which
> >>>will decrease the test coverage rate? Maybe such a job could be voting
> >>>in the
> >>>future so it is not ignored. For now, there is no simple way to check
> >>>coverage
> >>>because 'tox -e cover' output is not useful [2].
>
> The Manila project has a coverage job that may be of interest to Cinder.
> It’s not perfect, because sometimes the periodic loopingcall routines run
> during the test run and sometimes not, leading to false negatives.  But
> most of the time it’s a handy confirmation that the unit test coverage
> didn’t decline due to a patch.  Look at the manila-coverage job in this
> example:  https://review.openstack.org/#/c/287575/
>
> >>>
> >>>
> >>> Functional tests for Cinder
> >>>
> >>> We introduced some functional tests last month [3]. Here is a patch to
> >>> infra to add new job [4]. Because these tests were moved from
> >>>unit-tests, I
> >>> think we're OK to make this job voting. Such tests should not be a
> >>> replacement for Tempest. They could even test Cinder with the Fake Driver
> >>>to
> >>> make them faster and not dependent on storage backend issues.
> >>>
> >>>
> >>> Tempest in-tree tests
> >>>
> >>> Sean started work on it [5] and I think it's a good idea to get them in
> >>> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a
> >>>real
> >>> backend.
> >>>
> >>>
> >>> Functional tests for python-brick-cinderclient-ext
> >>>
> >>> There are patches that introduces functional tests [6] and new job [7].
> >>>
> >>>
> >>> Functional tests for python-cinderclient
> >>>
> >>> We've got a very limited set of such tests and non-voting job. IMO, we
> >>>can
> >>> run them even with the Cinder Fake Driver to make them not dependent on a
> >>> storage backend and make it faster. I believe, we can make this job
> >>>voting
> >>> soon. Also, we need more contributors to this kind of tests.
> >>>
> >>>
> >>> Integrated tests for python-cinderclient
> >>>
> >>> We need such tests to make sure that we won't break Nova, Heat or other
> >>> python-cinderclient consumers with a next merged patch. There is a
> >>>thread
> >>> in openstack-dev ML about such tests [8] and proposal [9] to introduce
> >>>them
> >>> to python-cinderclient.
> >>>
> >>>
> >>> Rally tests
> >>>
> >>> IMO, it would be good to have new Rally scenarios for every patches
> >>>like
> >>> 'improves performance', 'fixes concurrency issues', etc. Even if we as
> >>>a
> >>> Cinder community don't have enough time to implement them, we have to
> >>>ask
> >>> for them in reviews, openstack-dev ML, file Rally bugs and blueprints
> >>>if
> >>> needed.
> >>>
> >>>
> >>> [1] https://review.openstack.org/#/c/282861/
> >>> [2] http://paste.openstack.org/show/488925/
> >>> [3] https://review.openstack.org/#/c/267801/
> >>> [4] https://review.openstack.org/#/c/287115/
> >>> [5] https://review.openstack.org/#/c/274471/
> >>> [6] https://review.openstack.org/#/c/265811/
> >>> [7] https://review.openstack.org/#/c/265925/
> >>> [8]
> >>>
> >>>
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.htm
> >>>l
> >>> [9] https://review.openstack.org/#/c/279432/
> >>>
> >>>
> >>> Regards,
> >>> Ivan Kolodyazhny,
> >>> http://blog.e0ne.info/
> >>>
> >>>
> 

Re: [openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Paul Bourke
This is a messy topic. I feel there's been some miscommunication and 
confusion on the issue which hopefully I can sum up.


As far as I remember it, Sam is correct, we did always plan to do 
Liberty upgrades. However, over the course of time post Tokyo these 
plans didn't really materialise, at which point Michal kindly stepped 
forward to kick things into action [0].


Between now and then the focus shifted to "how do we get from Liberty to 
Mitaka", and the original discussion of moving between minor Liberty 
releases fell by the wayside. I think this is where the confusion has 
arisen.


As I mentioned before I have been opposed to backporting features to 
stable/liberty, as this is against the philosophy of a stable branch 
etc. etc. However, as has been mentioned many times before, this is a 
new project, hindsight is a great thing. Ideally users running Liberty 
who encounter a CVE could be told to go to Mitaka, but in reality this 
is an unreasonable expectation, and operators who have relied on our 
previous release notes saying that Liberty is ready to rock will feel screwed over.


So based on the above I am +1 to get upgrades into Liberty, I hope this 
makes sense.


Regards,
-Paul

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081467.html


On 07/03/16 16:00, Sam Yaple wrote:

On Mon, Mar 7, 2016 at 3:03 PM, Steven Dake (stdake) > wrote:

Hi folks,

It was never really discussed if we would back-port upgrades to
liberty.  This came up during  an irc conversation Friday [1], and a
vote was requested.  The facts of the discussion distilled are:

  * We never agreed as a group to do back-port of upgrade during our
back-port discussion
  * An operator that can't upgrade her Z version of Kolla (1.1.1
from 1.1.0) is stuck without CVE or OSSA corrections.
  * Because of a lack of security upgrades, the individual
responsible for executing the back-port would abandon the work
(but not use the abandon feature of gerrit for changes
already in the queue)

Since we never agreed, in that IRC discussion a vote was requested,
and I am administering the vote.  The vote request was specifically
should we have a back-port of upgrade in 1.1.0.  Both parties agreed
they would live with the results.

I would like to point out that not porting upgrades means that the
liberty branch would essentially become abandoned unless some other
brave soul takes up a backport.  I would also like to point out that
that is yet another exception much like thin-containers back-port
which was accepted.  See how exceptions become the way to the dark
side.  We really need to stay exception free going forward (in
Mitaka and later) as much as possible to prevent expectations that
we will make exceptions when none should be made.

This is not an exception. This was always a requirement. If you disagree
with that then we have never actually had a stable branch. The fact is
we _always_ needed z version upgrades for Kolla. It was _always_ the
plan to have them. Feel free to reference the IRC logs and our prior
mid-cycle and our Tokyo upgrade sessions. What changed was time and
peoples memories and that's why this is even a conversation.

Please vote +1 (backport) or –1 (don’t backport).  An abstain in
this case is the same as voting –1, so please vote either way.  I
will leave the voting open for 1 week until April 14th.  If there is
a majority in favor, I will close voting early.  We currently
require 6 votes for majority as our core team consists of 11 people.

Regards,
-steve


[1]

http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-03-04.log.html#t2016-03-04T18:23:26

Warning [1] was a pretty heated argument and there may have been
some swearing :)

voting.

"Should we back-port upgrades

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Obviously I am +1 on committing to a stable _stable_ branch. And that
has always required Z upgrades. Luckily the work we did in master is
also usable for z upgrades. Without the ability to perform an update
after a vulnerability, we have no stable branch.

I would remind everyone we _did_ have this conversation in Tokyo when
we discussed pinning to stable versions of other projects rather than
using head of their stable branch. We currently do that and there is a
review for a tool to make that even easier to maintain [1]. There was
even talk of a bot to propose these bumps in versions. That means z
upgrades were expected for Liberty. Anyone thinking otherwise has a short memory.

[openstack-dev] [nova] bugs team meeting: March 8th 2016 1800 UTC

2016-03-07 Thread Markus Zoeller
I'd like to remind you that tomorrow is the nova-bugs-team IRC meeting
at 18:00 UTC [1]. If you want to contribute to the nova bugs area
feel free to join. I prepared a list of potentially stale bugs [2]
which need a cleanup/check by a human and I had the hope that some
of you would pick 2-3 of that list and check them. If you are new to 
Nova you can join and ask questions about the process (a summary at [3]).
We also need volunteers for one week of skimming new bugs [4] if they 
are valid and fulfill the minimum of necessary information.
As we are heading to the Mitaka release candidate phase, these tasks
are an important step to a stable product.

References:
[1] https://wiki.openstack.org/wiki/Meetings/Nova/BugsTeam
[2] https://etherpad.openstack.org/p/nova-bugs-team
[3] http://markuszoeller.github.io/posts/2015/12/01/openstack-bugs/
[4] 
https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-07 Thread Hongbin Lu


From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-07-16 8:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin, I think the offer to support different OS options is a perfect example 
both of what we want and what we don't want. We definitely want to allow for 
someone like yourself to maintain templates for whatever OS they want and to 
have that option be easily integrated in to a Magnum deployment. However, when 
developing features or bug fixes, we can't wait for you to have time to add it 
for whatever OS you are promising to maintain.
It might be true that supporting an additional OS could slow down development 
speed, but the key question is how big the impact will be. Does it outweigh 
the benefits? IMO, the impact doesn’t seem to be significant, given the fact 
that most features and bug fixes are OS agnostic. Also, keep in mind that every 
feature we introduce (a variety of COEs, Nova virt-drivers, network drivers, 
volume drivers, …) incurs a maintenance overhead. If we want optimal development 
speed, we will be limited to supporting a single COE/virt driver/network 
driver/volume driver. I guess that is not the direction we would like to go?

Instead, we would all be forced to develop the feature for that OS as well. If 
every member of the team had a special OS like that we'd all have to maintain 
all of them.
To be clear, I don’t have a special OS, I guess neither do others who disagreed 
in this thread.

Alternatively, what was agreed on by most at the midcycle was that if someone 
like yourself wanted to support a specific OS option, we would have an easy 
place for those contributions to go without impacting the rest of the team. The 
team as a whole would agree to develop all features for at least the reference 
OS.
Could we re-confirm that this is a team agreement? There is no harm in 
re-confirming it at the design summit, on the ML, or in a team meeting. Frankly, 
it doesn’t seem to be.

Then individuals or companies who are passionate about an alternative OS can 
develop the features for that OS.

Corey

On Sat, Mar 5, 2016 at 12:30 AM Hongbin Lu 
> wrote:


From: Adrian Otto 
[mailto:adrian.o...@rackspace.com]
Sent: March-04-16 6:31 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Steve,

On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake) 
> wrote:

From: Adrian Otto >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for,
This is easy. Once we build comprehensive tests for the first OS, just re-run 
it for other OS(s).

and the implications that has on our pace of feature development. My guidance 
here is that we resist the temptation to create a system with more permutations 
than we can possibly support. The relation between bay node OS, Heat Template, 
Heat Template parameters, COE, and COE dependencies (could-init, docker, 
flannel, etcd, etc.) are multiplicative in nature. From the mid cycle, it was 
clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn’t have a default necessarily that locks out other defaults.  
Magnum devs are the experts in how these systems operate, and as such need to 
take on the responsibility of the implementation for multi-os support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)

That’s exactly my point. It takes a chunk of human bandwidth to carry that 
responsibility. If we had a system engineer assigned from each of the various 
upstream OS distros working with Magnum, this would not be a big deal. 
Expecting our current contributors to support a variety of OS variants is not 

[openstack-dev] [Ceilometer] Doubt about data recollection

2016-03-07 Thread Alberto Gutiérrez Torre
Hello,

I started working with OpenStack's Ceilometer recently and I have some
questions that I cannot resolve with the documentation I have found.

I would like to have the telemetry of a node available inside the node itself (for
example, in a virtual machine that resides on that node and processes the
local information in some way) without passing through the global scope
(the Ceilometer API with the default settings). Is that possible? I have read
about agent plugins, and they may be a way to approach the solution
(http://docs.openstack.org/developer/ceilometer/contributing/plugins.html).
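In case a concrete starting point helps: the agent-plugin route usually means
writing a pollster and registering it through an entry point, so that a polling
agent running for the node publishes samples wherever its pipeline configuration
says, rather than only through the global API. A rough sketch — the class, the
meter name and the 'local_node' discovery are illustrative and should be checked
against the plugin documentation linked above:

    from oslo_utils import timeutils

    from ceilometer.agent import plugin_base
    from ceilometer import sample


    class LocalNodePollster(plugin_base.PollsterBase):
        """Hypothetical pollster measuring something on the local node."""

        @property
        def default_discovery(self):
            return 'local_node'

        def get_samples(self, manager, cache, resources):
            for resource in resources:
                value = 42.0  # replace with a real local measurement
                yield sample.Sample(
                    name='node.custom.metric',
                    type=sample.TYPE_GAUGE,
                    unit='unit',
                    volume=value,
                    user_id=None,
                    project_id=None,
                    resource_id=str(resource),
                    timestamp=timeutils.utcnow().isoformat(),
                    resource_metadata={},
                )

The pollster then needs a ceilometer.poll.* entry point in setup.cfg so the
agent loads it, and the pipeline configuration decides where the resulting
samples are published.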

Best regards,
Alberto.

WARNING / LEGAL TEXT: This message is intended only for the use of the
individual or entity to which it is addressed and may contain
information which is privileged, confidential, proprietary, or exempt
from disclosure under applicable law. If you are not the intended
recipient or the person responsible for delivering the message to the
intended recipient, you are strictly prohibited from disclosing,
distributing, copying, or in any way using this message. If you have
received this communication in error, please notify the sender and
destroy and delete any copies you may have received.

http://www.bsc.es/disclaimer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we need a microversion to return a 501 from GET os-virtual-interfaces?

2016-03-07 Thread Ken'ichi Ohmichi
2016-03-06 15:41 GMT-08:00 Jay Pipes :
> On 03/06/2016 02:21 PM, Matt Riedemann wrote:
>>
>> Thanks, good point. I think 400 would be best here. And according to our
>> handy-dandy docs [1] it should be OK to do this without a microversion.
>
>
> Yup, 400 is correct and yes, you should not need a new microversion addition
> to the API.

+1 for not bumping an unnecessary microversion.

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to fix the wrong usage of 'format' jsonschema keyword in server create API

2016-03-07 Thread Ken'ichi Ohmichi
Hi Alex,

Thanks for pointing this up.

2016-03-06 22:18 GMT-08:00 Alex Xu :
>
> Due to this regression bug https://launchpad.net/bugs/1552888, we found we are
> using the 'format' jsonschema keyword in the wrong way. The 'format' keyword is not
> only for string instances, but for all types.
>
> So checking the server create API, we have a problem at
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L38.
> The original purpose of the schema was to allow the 'null' value via the 'type'
> keyword. Actually the 'null' value isn't allowed, as the 'uuid' format check will
> fail for a 'null' value.
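For anyone following along, the behaviour Alex describes is easy to reproduce
with plain jsonschema; the naive 'uuid' checker below is only a stand-in for
Nova's real one. Because format checkers are applied to instances of any type,
the checker rejects None even though 'type' allows null:

    import uuid

    import jsonschema

    checker = jsonschema.FormatChecker()


    @checker.checks('uuid')
    def _validate_uuid(instance):
        # No isinstance() guard, so this also runs for None.
        try:
            uuid.UUID(str(instance))
            return True
        except ValueError:
            return False


    schema = {'type': ['string', 'null'], 'format': 'uuid'}

    jsonschema.validate('123e4567-e89b-12d3-a456-426655440000', schema,
                        format_checker=checker)   # passes
    jsonschema.validate(None, schema, format_checker=checker)  # ValidationError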
>
> So what we need to do for this:
>
> Option1: Fix it with microversion
> Option2: Fix it without microversion
> Option3: Keep it as 'null' isn't allowed.
>
> If we choice to fix it, I think we need Microversion, due to this bug is
> introduced before v2.1 API release.
>
> But I prefer not to fix it, as we said v2.1 makes the API input strict. And 'null'
> isn't a valuable value (if the port isn't provided by the user, the user can just
> omit this parameter). I think we can just keep it disallowed and keep the current
> behaviour as the v2.1 API behaviour.
>
> What would you think?

I believe we don't need to bump a new microversion for this situation.
We have already fixed this kind of regression from v2.0 API *without*
a new microversion.

https://review.openstack.org/#/c/259247/

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Michał Jastrzębski
Upgrades are required, true, but not necessarily automatic ones.
People can still rebuild and redeploy containers using a normal deploy.
It will cause downtime and be less optimal, but it is possible. Also, with
the backport of named volumes it won't be data-destroying. It will cause
total downtime of the APIs, but well, it's the first version.

So I'm -1 to porting it to 1.1.0.

But I would suggest another option, namely a 1.2.0 with the automatic
upgrades we have now. It will allow upgrading 1.1.0->1.2.0 and it will
not add more work to 1.1.0, which we need asap (we need it well tested
by the Austin summit at the latest). Adding upgrades might make it tight,
especially since infra upgrades aren't finished yet in master.

Cheers,
Michal

On 7 March 2016 at 10:00, Sam Yaple  wrote:
> On Mon, Mar 7, 2016 at 3:03 PM, Steven Dake (stdake) 
> wrote:
>>
>> Hi folks,
>>
>> It was never really discussed if we would back-port upgrades to liberty.
>> This came up during  an irc conversation Friday [1], and a vote was
>> requested.  The facts of the discussion distilled are:
>>
>> We never agreed as a group to do back-port of upgrade during our back-port
>> discussion
>> An operator that can't upgrade her Z version of Kolla (1.1.1 from 1.1.0)
>> is stuck without CVE or OSSA corrections.
>> Because of a lack of security upgrades, the individual responsible for
>> executing the back-port would abandon the work (but not use the abandon
>> feature of gerrit for changes already in the queue)
>>
>> Since we never agreed, in that IRC discussion a vote was requested, and I
>> am administering the vote.  The vote request was specifically should we have
>> a back-port of upgrade in 1.1.0.  Both parties agreed they would live with
>> the results.
>>
>> I would like to point out that not porting upgrades means that the liberty
>> branch would essentially become abandoned unless some other brave soul takes
>> up a backport.  I would also like to point out that that is yet another
>> exception much like thin-containers back-port which was accepted.  See how
>> exceptions become the way to the dark side.  We really need to stay
>> exception free going forward (in Mitaka and later) as much as possible to
>> prevent expectations that we will make exceptions when none should be made.
>>
>
> This is not an exception. This was always a requirement. If you disagree
> with that then we have never actually had a stable branch. The fact is we
> _always_ needed z version upgrades for Kolla. It was _always_ the plan to
> have them. Feel free to reference the IRC logs and our prior mid-cycle and
> our Tokyo upgrade sessions. What changed was time and people's memories and
> that's why this is even a conversation.
>
>>
>> Please vote +1 (backport) or –1 (don’t backport).  An abstain in this case
>> is the same as voting –1, so please vote either way.  I will leave the
>> voting open for 1 week until April 14th.  If there is a majority in favor, I
>> will close voting early.  We currently require 6 votes for majority as our
>> core team consists of 11 people.
>>
>> Regards,
>> -steve
>>
>>
>> [1]
>> http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-03-04.log.html#t2016-03-04T18:23:26
>>
>> Warning [1] was a pretty heated argument and there may have been some
>> swearing :)
>>
>> voting.
>>
>> "Should we back-port upgrades
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Obviously I am +1 on committing to a stable _stable_ branch. And that has
> always required Z upgrades. Luckily the work we did in master is also usable
> for z upgrades. Without the ability to perform an update after a
> vulnerability, we have no stable branch.
>
> I would remind everyone we _did_ have this conversation in Tokyo when we
> discussed pinning to stable versions of other projects rather than using
> head of their stable branch. We currently do that and there is a review for
> a tool to make that even easier to maintain [1]. There was even talk of a
> bot to propose these bumps in versions. That means z upgrades were expected
> for Liberty. Anyone thinking otherwise has a short memory.
>
> [1] https://review.openstack.org/#/c/248481/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] Fwd: [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-07 Thread Hayes, Graham
On 07/03/2016 16:00, Matt Kassawara wrote:
> Snipping a lot, because I'm particularly interested in one comment...
>
>
> I agree that many developers (especially developers of those groups
> new to the big tent) seem to think that they need to be on the top
> level docs.openstack.org , without
> necessarily understanding that docs.openstack.org/developer
>  is usually the more
> appropriate place, at  least to begin with. This should probably be
> made more explicit both in the Infra Manual, plus anywhere that
> Foundation is discussing big tent inclusion.
>
>
> We tell operators and users to look at the documentation on
> docs.openstack.org  because the documentation
> in /developer is aimed at developers and often lacks polish. Now we're
> telling developers to put operator/user documentation into /developer?

To follow up on that - the Foundation's "Project Navigator" has one of
the maturity requirements as "Is there an install guide for this
project guide (at docs.openstack.org)?"

If this is required, how can projects get content into it?

I went looking about 6 months ago for the information that Stephen
asked for to update (create) our docs, but couldn't find it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration shared storage negotiation

2016-03-07 Thread Chris Friesen

On 03/07/2016 09:23 AM, Matthew Booth wrote:

Timofey has posted a patch which aims to improve shared storage negotiation a
little:

https://review.openstack.org/#/c/286745/

I've been thinking for some time about going bigger. It occurs to me that the
destination node has enough context to know what should be present. I think it
would be cleaner for the destination to check what it has, and send the delta
back to the source in migration_data. I think that would remove the need for
shared storage negotiation entirely. Of course, we'd need to be careful about
cleaning up after ourselves, but that's already true.


Sounds interesting if you can make it work reliably.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread Knight, Clinton


On 3/7/16, 10:45 AM, "Eric Harney"  wrote:

>On 03/06/2016 09:35 PM, John Griffith wrote:
>> On Sat, Mar 5, 2016 at 4:27 PM, Jay S. Bryant
>>>> wrote:
>> 
>>> Ivan,
>>>
>>> I agree that our testing needs improvement.  Thanks for starting this
>>> thread.
>>>
>>> With regards to adding a hacking check for tests that run too long ...
>>>are
>>> you thinking that we would have a timer that checks for long-running
>>>jobs or
>>> something that checks for long sleeps in the testing code?  Just
>>>curious
>>> your ideas for tackling that situation.  Would be interested in helping
>>> with that, perhaps.
>>>
>>> Thanks!
>>> Jay
>>>
>>>
>>> On 03/02/2016 05:25 AM, Ivan Kolodyazhny wrote:
>>>
>>> Hi Team,
>>>
>>> Here are my thoughts and proposals how to make Cinder testing process
>>> better. I won't cover "3rd party CI's" topic here. I will share my
>>>opinion
>>> about current and feature jobs.
>>>
>>>
>>> Unit-tests
>>>
>>>- Long-running tests. I hope, everybody will agree that unit-tests
>>>must be quite simple and very fast. Unit tests which take more
>>>than 3-5
>>>seconds should be refactored and/or moved to 'integration' tests.
>>>Thanks to Tom Barron for several fixes like [1]. IMO, it would be
>>>good to have some hacking checks to prevent such issues in the future.
>>>
>>>- Tests coverage. We don't check it in an automatic way on gates.
>>>Usually, we require to add some unit-tests during code review
>>>process. Why
>>>can't we add a coverage job to our CI and not merge new patches
>>>which
>>>will decrease the test coverage rate? Maybe such a job could be voting
>>>in the
>>>future so it is not ignored. For now, there is no simple way to check
>>>coverage
>>>because 'tox -e cover' output is not useful [2].

The Manila project has a coverage job that may be of interest to Cinder.
It’s not perfect, because sometimes the periodic loopingcall routines run
during the test run and sometimes not, leading to false negatives.  But
most of the time it’s a handy confirmation that the unit test coverage
didn’t decline due to a patch.  Look at the manila-coverage job in this
example:  https://review.openstack.org/#/c/287575/
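For what it's worth, the comparison at the heart of such a job can be small. A
rough sketch, assuming Cobertura-style XML reports (as produced by 'coverage
xml') were already generated for the parent commit and for the patch; the file
names are illustrative:

    import sys
    import xml.etree.ElementTree as ET


    def line_rate(report_path):
        """Overall line-coverage rate from a Cobertura-style XML report."""
        root = ET.parse(report_path).getroot()
        return float(root.attrib['line-rate'])


    def main(parent_report, patch_report, allowed_drop=0.0):
        before = line_rate(parent_report)
        after = line_rate(patch_report)
        print('coverage: %.2f%% -> %.2f%%' % (before * 100, after * 100))
        if after + allowed_drop < before:
            print('Coverage decreased; failing the job.')
            return 1
        return 0


    if __name__ == '__main__':
        sys.exit(main(sys.argv[1], sys.argv[2]))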

>>>
>>>
>>> Functional tests for Cinder
>>>
>>> We introduced some functional tests last month [3]. Here is a patch to
>>> infra to add new job [4]. Because these tests were moved from
>>>unit-tests, I
>>> think we're OK to make this job voting. Such tests should not be a
>>> replacement for Tempest. They could even test Cinder with the Fake Driver
>>>to
>>> make them faster and not dependent on storage backend issues.
>>>
>>>
>>> Tempest in-tree tests
>>>
>>> Sean started work on it [5] and I think it's a good idea to get them in
>>> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a
>>>real
>>> backend.
>>>
>>>
>>> Functional tests for python-brick-cinderclient-ext
>>>
>>> There are patches that introduces functional tests [6] and new job [7].
>>>
>>>
>>> Functional tests for python-cinderclient
>>>
>>> We've got a very limited set of such tests and non-voting job. IMO, we
>>>can
>>> run them even with the Cinder Fake Driver to make them not dependent on a
>>> storage backend and make it faster. I believe, we can make this job
>>>voting
>>> soon. Also, we need more contributors to this kind of tests.
>>>
>>>
>>> Integrated tests for python-cinderclient
>>>
>>> We need such tests to make sure that we won't break Nova, Heat or other
>>> python-cinderclient consumers with a next merged patch. There is a
>>>thread
>>> in openstack-dev ML about such tests [8] and proposal [9] to introduce
>>>them
>>> to python-cinderclient.
>>>
>>>
>>> Rally tests
>>>
>>> IMO, it would be good to have new Rally scenarios for every patches
>>>like
>>> 'improves performance', 'fixes concurrency issues', etc. Even if we as
>>>a
>>> Cinder community don't have enough time to implement them, we have to
>>>ask
>>> for them in reviews, openstack-dev ML, file Rally bugs and blueprints
>>>if
>>> needed.
>>>
>>>
>>> [1] https://review.openstack.org/#/c/282861/
>>> [2] http://paste.openstack.org/show/488925/
>>> [3] https://review.openstack.org/#/c/267801/
>>> [4] https://review.openstack.org/#/c/287115/
>>> [5] https://review.openstack.org/#/c/274471/
>>> [6] https://review.openstack.org/#/c/265811/
>>> [7] https://review.openstack.org/#/c/265925/
>>> [8]
>>> 
>>>http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.htm
>>>l
>>> [9] https://review.openstack.org/#/c/279432/
>>>
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>>
>>>
>>> ​We could just parse out the tox slowest tests output we already have.
>>> Do
>> something like pylint where we look at existing/current slowest test and
>> balk if that's exceeded.
>> 
>> Thoughts?
>> 
>> John​
>> 
>
>I'm not really sure that writing a "hacking" check for this is a
>worthwhile investment.  (It's not a hacking check really, but something

Re: [openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Sam Yaple
On Mon, Mar 7, 2016 at 3:03 PM, Steven Dake (stdake) 
wrote:

> Hi folks,
>
> It was never really discussed if we would back-port upgrades to liberty.
> This came up during  an irc conversation Friday [1], and a vote was
> requested.  The facts of the discussion distilled are:
>
>- We never agreed as a group to do back-port of upgrade during our
>back-port discussion
>- An operator that can't upgrade her Z version of Kolla (1.1.1 from
>1.1.0) is stuck without CVE or OSSA corrections.
>- Because of a lack of security upgrades, the individual responsible
>for executing the back-port would abandon the work (but not use the
>abandon feature of gerrit for changes already in the queue)
>
> Since we never agreed, in that IRC discussion a vote was requested, and I
> am administering the vote.  The vote request was specifically should we
> have a back-port of upgrade in 1.1.0.  Both parties agreed they would live
> with the results.
>
> I would like to point out that not porting upgrades means that the liberty
> branch would essentially become abandoned unless some other brave soul
> takes up a backport.  I would also like to point out that that is yet
> another exception much like thin-containers back-port which was accepted.
> See how exceptions become the way to the dark side.  We really need to stay
> exception free going forward (in Mitaka and later) as much as possible to
> prevent expectations that we will make exceptions when none should be made.
>
>
This is not an exception. This was always a requirement. If you disagree
with that then we have never actually had a stable branch. The fact is we
_always_ needed z version upgrades for Kolla. It was _always_ the plan to
have them. Feel free to reference the IRC logs and our prior mid-cycle and
our Tokyo upgrade sessions. What changed was time and people's memories and
that's why this is even a conversation.


> Please vote +1 (backport) or –1 (don’t backport).  An abstain in this case
> is the same as voting –1, so please vote either way.  I will leave the
> voting open for 1 week until April 14th.  If there is a majority in favor, I
> will close voting early.  We currently require 6 votes for majority as our
> core team consists of 11 people.
>
> Regards,
> -steve
>
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-03-04.log.html#t2016-03-04T18:23:26
>
> Warning [1] was a pretty heated argument and there may have been some
> swearing :)
>
> voting.
>
> "Should we back-port upgrades
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Obviously I am +1 on committing to a stable _stable_ branch. And that has
always required Z upgrades. Luckily the work we did in master is also
usable for z upgrades. Without the ability to perform an update after a
vulnerability, we have no stable branch.

I would remind everyone we _did_ have this conversation in Tokyo when we
discussed pinning to stable versions of other projects rather than using
head of their stable branch. We currently do that and there is a review for
a tool to make that even easier to maintain [1]. There was even talk of a
bot to propose these bumps in versions. That means z upgrades were expected
for Liberty. Anyone thinking otherwise has a short memory.

[1] https://review.openstack.org/#/c/248481/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The stable:follows-policy tag is official, projects need to start applying for it

2016-03-07 Thread Matt Riedemann
The stable:follows-policy tag is now official in the governance repo 
[1]. Projects that follow the stable branch policy [2] should start 
applying for that tag.


Note that the tag may be used to determine when a project's stable 
branches go end of life, so it's important to have this information, 
especially as we head into the Newton summit where we'll be discussing 
the end of life date for stable/kilo.


[1] 
http://governance.openstack.org/reference/tags/stable_follows-policy.html#tag-application-process

[2] http://docs.openstack.org/project-team-guide/stable-branches.html

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-07 Thread Chris Friesen
Just a heads-up that the 3.10 kernel in CentOS/RHEL is *not* a stock 3.10 
kernel.  It has had many things backported from later kernels, though they may 
not have backported the specific improvements you're looking for.


I think CentOS is using qemu 2.3, which is pretty new.  Not sure how new their 
libiscsi is though.


Chris

On 03/07/2016 12:25 AM, Preston L. Bannister wrote:

Should add that the physical host of the moment is Centos 7 with a packstack
install of OpenStack. The instance is Ubuntu Trusty. Centos 7 has a relatively
old 3.10 Linux kernel.

 From the last week (or so) of digging, I found there were substantial claimed
improvements in /both/ flash support in Linux and the block I/O path in QEMU -
in more recent versions. How much that impacts the current measures, I do not
(yet) know.

Which suggests a bit of tension. Redhat folk are behind much of these
improvements, but RHEL (and Centos) are rather far behind. Existing RHEL
customers want and need careful, conservative changes. Folk deploying OpenStack
need more aggressive release content, for which Ubuntu is currently the best 
base.

Will we see a "Redhat Cloud Base" as an offering with RHEL support levels, and
more aggressive QEMU and Linux kernel inclusion?

At least for now, building OpenStack clouds on Ubuntu might be a much better 
bet.


Are those claimed improvements in QEMU and the Linux kernel going to make a
difference in my measured result? I do not know. Still reading, building tests,
and collecting measures...




On Thu, Mar 3, 2016 at 11:28 AM, Chris Friesen > wrote:

On 03/03/2016 01:13 PM, Preston L. Bannister wrote:

 > Scanning the same volume from within the instance still gets the 
same
 > ~450MB/s that I saw before.

 Hmmm, with iSCSI inbetween that could be the TCP memcpy limitation.


Measuring iSCSI in isolation is next on my list. Both on the physical
host, and
in the instance. (Now to find that link to the iSCSI test, again...)


Based on earlier comments it appears that you're using the qemu built-in
iSCSI initiator.

Assuming that's the case, maybe it would make sense to do a test run with
the in-kernel iSCSI code and take qemu out of the picture?

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] Fwd: [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-07 Thread Matt Kassawara
Snipping a lot, because I'm particularly interested in one comment...


> I agree that many developers (especially developers of those groups new to
> the big tent) seem to think that they need to be on the top level
> docs.openstack.org, without necessarily understanding that
> docs.openstack.org/developer is usually the more appropriate place, at
> least to begin with. This should probably be made more explicit both in the
> Infra Manual, plus anywhere that Foundation is discussing big tent
> inclusion.


We tell operators and users to look at the documentation on
docs.openstack.org because the documentation in /developer is aimed at
developers and often lacks polish. Now we're telling developers to put
operator/user documentation into /developer?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread Duncan Thomas
On 7 March 2016 at 23:45, Eric Harney  wrote:

>
>
> I'm not really sure that writing a "hacking" check for this is a
> worthwhile investment.  (It's not a hacking check really, but something
> more like what you're describing, but that's beside the point.)
>
> We should just be looking for large, complex unit tests in review, and
> the ones that we already have should be moving towards the functional
> test area anyway.
>
> So what would the objective here be exactly?
>
>
Complexity can be tricky to spot by hand, and expecting reviewers to get it
right all of the time is not a reasonable expectation.

My ideal would be something that processes the commit and the jenkins logs,
extracts the timing info of any new tests, and if they are outside some
(fairly tight) window, then posts a comment to the review indicating that
these tests should get closer scrutiny. This does not remove reviewer
judgement from the equation, just provides a helpful prod that there's
something to be considered.
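For illustration, the log-scraping half of that idea is small. Posting the
comment back to Gerrit and narrowing the check to only the tests added by the
commit are left out here, and the two-column "test id / runtime" layout parsed
below is an assumption about what the job log contains:

    import re
    import sys

    # Assumes lines of the form:
    #   cinder.tests.unit.test_volume.VolumeTestCase.test_foo    7.331
    ROW = re.compile(r'^(?P<test>[\w.]+)\s+(?P<seconds>\d+\.\d+)\s*$')

    THRESHOLD = 3.0  # seconds -- the "fairly tight window"


    def slow_tests(lines, threshold=THRESHOLD):
        for line in lines:
            match = ROW.match(line.strip())
            if match and float(match.group('seconds')) > threshold:
                yield match.group('test'), float(match.group('seconds'))


    def main(log_path):
        with open(log_path) as log:
            offenders = list(slow_tests(log))
        for test, seconds in offenders:
            print('%6.2fs  %s  (exceeds %.1fs, deserves closer scrutiny)'
                  % (seconds, test, THRESHOLD))
        return 1 if offenders else 0


    if __name__ == '__main__':
        sys.exit(main(sys.argv[1]))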

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-web] Jenkins failing, tox -e py27 error

2016-03-07 Thread Jeremy Stanley
On 2016-03-07 16:50:00 +0200 (+0200), Igor Kalnitsky wrote:
> I got it and I'm working on a patch [1] that should solve it. I simply
> stopped doing the postgres setup since it seems it is already set up.
[...]

Great! I'd still like to figure out why the job suddenly started
assuming the role was missing on the new workers (or why it started
failing to be able to recreate it). It's possible we're now granting
more generous permissions to the openstack_citest user than in the
past, but skimming your job it doesn't look like that should be the
cause. Updating env.sh to set -x might make this easier to debug.
I'll move followup comments to your
https://review.openstack.org/289278 review.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-07 Thread Aleksey Zvyagintsev
Hello,
that is definitely about the HDD. Create a disk with at least 50 GB.

On Mon, Mar 7, 2016 at 5:32 PM, Samer Machara <
samer.mach...@telecom-sudparis.eu> wrote:

> Hello,
>YES, I just find the solution, the Virtualization Option on the BIOS
> was Disable by default, I turn it on and It's working now.
> Now my problem are the resources, But  that's another story. JEJEJE
>
> I'm not sure if I need RAM or HD.
>
>
>
> Thanks for your help.
>
> --
> *De: *"Igor Marnat" 
> *À: *"Maksim Malchuk" 
> *Cc: *"Samer Machara" , "OpenStack
> Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Envoyé: *Lundi 7 Mars 2016 11:21:42
>
> *Objet: *Re: [openstack-dev] [Fuel] [Openstack] Instalation
> Problem:VBoxManage: error: Guest not running [ubuntu14.04]
>
> Samer,
> did you make any progress?
>
> If not yet, I have couple of questions:
> - Did you download MOS image and VBox scripts from
> https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html#downloading-the-mirantis-openstack-image
> ?
>
> - Can you login to your just deployed master node?
>
> If you can, could you please send last 50-70 strings of the file
> /var/log/puppet/bootstrap_admin_node.log from your master node?
>
> Regards,
> Igor Marnat
>
> On Fri, Mar 4, 2016 at 11:20 PM, Maksim Malchuk 
> wrote:
>
>> Samer, please address my recommendations.
>>
>>
>> On Fri, Mar 4, 2016 at 7:49 PM, Samer Machara <
>> samer.mach...@telecom-sudparis.eu> wrote:
>>
>>> Hi, Igor
>>>   Thanks for answer so quickly.
>>>
>>> I wait until the following message appears
>>> Installation timed out! (3000 seconds)
>>> I don't have any virtual machines created.
>>>
>>> I update to 5.0 VirtualBox version, Now I got the following message
>>>
>>> VBoxManage: error: Machine 'fuel-master' is not currently running
>>>  Waiting for product VM to download files. Please do NOT abort the
>>> script...
>>>
>>> I'm still waiting
>>>
>>> --
>>> *De: *"Maksim Malchuk" 
>>> *À: *"OpenStack Development Mailing List (not for usage questions)" <
>>> openstack-dev@lists.openstack.org>
>>> *Envoyé: *Vendredi 4 Mars 2016 15:19:54
>>> *Objet: *Re: [openstack-dev] [Fuel] [Openstack] Instalation
>>> Problem:VBoxManage: error: Guest not running [ubuntu14.04]
>>>
>>>
>>> Igor,
>>>
>>> Some information about my system:
>>> OS: ubuntu 14.04 LTS
>>> Memory: 3,8GiB
>>>
>>> Samer can't run many guests I think.
>>>
>>>
>>> On Fri, Mar 4, 2016 at 5:12 PM, Igor Marnat 
>>> wrote:
>>>
 Samer, Maksim,
 I'd rather say that script started fuel-master already (VM
 "fuel-master" has been successfully started.), didn't find running guests,
 (VBoxManage: error: Guest not running) but it can try to start them
 afterwards.

 Samer,
 - how many VMs are there running besides fuel-master?
 - is it still showing "Waiting for product VM to download files. Please
 do NOT abort the script..." ?
 - for how long did you wait since the message above?


 Regards,
 Igor Marnat

 On Fri, Mar 4, 2016 at 5:04 PM, Maksim Malchuk 
 wrote:

> Hi Sames,
>
> *VBoxManage: error: Guest not running*
>
> looks line the problem with VirtualBox itself or settings for the
> 'fuel-master' VM, it can't boot it.
> Open the VirtualBox Manger (GUI), select the 'fuel-master' VM and
> start it manually - it should show you what is exactly happens.
>
>
> On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara <
> samer.mach...@telecom-sudparis.eu> wrote:
>
>> Hello, everyone.
>> I'm new with Fuel. I'm trying to follow the QuickStart Guide (
>> https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html),
>> but I have the following Error:
>>
>>
>> *Waiting for VM "fuel-master" to power on...*
>> *VM "fuel-master" has been successfully started.*
>> *VBoxManage: error: Guest not running*
>> *VBoxManage: error: Guest not running*
>> ...
>> *VBoxManage: error: Guest not running*
>> *Waiting for product VM to download files. Please do NOT abort the
>> script...*
>>
>>
>>
>> I hope you can help me.
>>
>> Thanks in advance
>>
>>
>> Some information about my system:
>> OS: ubuntu 14.04 LTS
>> Memory: 3,8GiB
>> Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4
>> OS type: 64-bit
>> Disk 140,2GB
>> VirtualBox Version: 4.3.36_Ubuntu
>> Checking for 'expect'... OK
>> Checking for 'xxd'... OK
>> Checking for "VBoxManage"... OK
>> Checking for VirtualBox Extension Pack... OK
>> Checking if SSH client installed... OK
>> Checking if ipconfig or ifconfig installed... OK
>>
>>
>> I modify the config.sh to adapt 

Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-07 Thread Sam Yaple
+1 Keep up the great reviews and patches!

Sam Yaple

On Mon, Mar 7, 2016 at 3:41 PM, Jeff Peeler  wrote:

> +1
>
> On Mon, Mar 7, 2016 at 3:57 AM, Michal Rostecki 
> wrote:
> > +1
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Dealing with nonexistent resources during resource-list / stack-delete

2016-03-07 Thread Zane Bitter

On 04/03/16 04:35, Johannes Grassler wrote:

Hello,

I am currently looking into https://bugs.launchpad.net/heat/+bug/1442121
and
not quite sure how to tackle it, since the problematic code is used for
lots of
things[0]: the root cause of the problem is API clients in resource
plugins
that do not anticipate a resource with an entry in Heat's database
having been
deleted in the implementing service's database[1]. Here's an example:

https://github.com/openstack/heat/blob/e4dc942ce1a8c79b450345c7afae326c80d8a5d6/heat/engine/resources/openstack/neutron/floatingip.py#L179


If that happens[1] an uncaught exception will be thrown that among other
things
breaks the very operations one would need for cleaning up the mess.

As far as I can see, the cleanest way would be to go through all
resources
with a fine-tooth comb and add exception handling to the API calls in the
add_dependencies() method where it is missing (just return False for any
resource that no longer exists). Or is there a better way?


Yes, you're right and this sucks. That's not the only problem we've had 
in this area recently - for example there was also:


https://bugs.launchpad.net/heat/+bug/1536515.

The fact that we have to have these hacked in implicit dependencies at 
all is terrible, but we really need to make sure they can't break basic 
operations like loading a stack from the DB so we can show or delete it. 
The ideal would be:


* We can guarantee to catch all (non-exit) exceptions, no matter what 
kind of crazy stuff people write in add_dependencies()
* The plugin developer doesn't have to do anything special to get this 
behaviour
* The code remains backwards compatible with any third-party resource 
plugins circulating out there
* We always add as many dependencies as possible (i.e. all 
non-exception-raising dependencies are added)
* Genuine dependency problems (e.g. non-existent target of 
get_resource/get_attr) are still surfaced, preferably on CREATE only


I'm pretty sure getting all of those is impossible. I'd be very 
interested in evaluating different tradeoffs we could make among them 
though.


In the meantime, we need to find and squash every instance of this 
problem wherever we can like you said.
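For reference, the per-call guard Johannes suggests looks roughly like the
sketch below in a resource plugin. The resource class, the show_thing() client
call and the _needs_dependency() helper are made up for illustration, and a
real plugin would catch only the client's "not found" exception rather than a
blanket Exception:

    import six

    from heat.engine import resource


    class ExampleResource(resource.Resource):
        """Hypothetical plugin with a defensive add_dependencies()."""

        def add_dependencies(self, deps):
            super(ExampleResource, self).add_dependencies(deps)
            for res in six.itervalues(self.stack):
                try:
                    # Anything here that asks the backing service about the
                    # other resource can blow up if that resource was deleted
                    # behind Heat's back.
                    target = self.client().show_thing(res.resource_id)
                except Exception:
                    # The backing object is gone (or the call failed); skip
                    # the implicit dependency so resource-list and
                    # stack-delete keep working instead of raising.
                    continue
                if self._needs_dependency(target):
                    deps += (self, res)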


cheers,
Zane.



Cheers,

Johannes

Footnotes:

[0] Whenever a stack's resources are being listed using
 heat.engine.service.list_stack_resources() (resource-list and stack-delete
 all invoke list_stack_resources()). stack-abandon does so indirectly (it
 appears to trigger stack-delete judging by the log, but it yields the
 desired output, at least in Liberty). These are just the ones I tested,
 there are probably more.

[1] It can happen for a number of reasons, either due to resource
dependency
 problems upon stack-delete as it happened in the original bug
report or due
 to an operator accidentally deleting resources that are managed by Heat.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread Eric Harney
On 03/06/2016 09:35 PM, John Griffith wrote:
> On Sat, Mar 5, 2016 at 4:27 PM, Jay S. Bryant > wrote:
> 
>> Ivan,
>>
>> I agree that our testing needs improvement.  Thanks for starting this
>> thread.
>>
>> With regards to adding a hacking check for tests that run too long ... are
>> you thinking that we would have a timer that checks for long-running jobs or
>> something that checks for long sleeps in the testing code?  Just curious
>> your ideas for tackling that situation.  Would be interested in helping
>> with that, perhaps.
>>
>> Thanks!
>> Jay
>>
>>
>> On 03/02/2016 05:25 AM, Ivan Kolodyazhny wrote:
>>
>> Hi Team,
>>
>> Here are my thoughts and proposals how to make Cinder testing process
>> better. I won't cover "3rd party CI's" topic here. I will share my opinion
>> about current and feature jobs.
>>
>>
>> Unit-tests
>>
>>- Long-running tests. I hope, everybody will agree that unit-tests
>>must be quite simple and very fast. Unit tests which take more than 3-5
>>seconds should be refactored and/or moved to 'integration' tests.
>>Thanks to Tom Barron for several fixes like [1]. IMO, it would be
>>good to have some hacking checks to prevent such issues in the future.
>>
>>- Tests coverage. We don't check it in an automatic way on gates.
>>Usually, we require to add some unit-tests during code review process. Why
>>can't we add a coverage job to our CI and not merge new patches which
>>will decrease the test coverage rate? Maybe such a job could be voting in the
>>future so it is not ignored. For now, there is no simple way to check
>> coverage
>>because 'tox -e cover' output is not useful [2].
>>
>>
>> Functional tests for Cinder
>>
>> We introduced some functional tests last month [3]. Here is a patch to
>> infra to add new job [4]. Because these tests were moved from unit-tests, I
>> think we're OK to make this job voting. Such tests should not be a
>> replacement for Tempest. They could even test Cinder with the Fake Driver to
>> make them faster and not dependent on storage backend issues.
>>
>>
>> Tempest in-tree tests
>>
>> Sean started work on it [5] and I think it's a good idea to get them in
>> Cinder repo to run them on Tempest jobs and 3-rd party CIs against a real
>> backend.
>>
>>
>> Functional tests for python-brick-cinderclient-ext
>>
>> There are patches that introduces functional tests [6] and new job [7].
>>
>>
>> Functional tests for python-cinderclient
>>
>> We've got a very limited set of such tests and non-voting job. IMO, we can
>> run them even with the Cinder Fake Driver to make them not dependent on a
>> storage backend and make it faster. I believe, we can make this job voting
>> soon. Also, we need more contributors to this kind of tests.
>>
>>
>> Integrated tests for python-cinderclient
>>
>> We need such tests to make sure that we won't break Nova, Heat or other
>> python-cinderclient consumers with a next merged patch. There is a thread
>> in openstack-dev ML about such tests [8] and proposal [9] to introduce them
>> to python-cinderclient.
>>
>>
>> Rally tests
>>
>> IMO, it would be good to have new Rally scenarios for every patches like
>> 'improves performance', 'fixes concurrency issues', etc. Even if we as a
>> Cinder community don't have enough time to implement them, we have to ask
>> for them in reviews, openstack-dev ML, file Rally bugs and blueprints if
>> needed.
>>
>>
>> [1] https://review.openstack.org/#/c/282861/
>> [2] http://paste.openstack.org/show/488925/
>> [3] https://review.openstack.org/#/c/267801/
>> [4] https://review.openstack.org/#/c/287115/
>> [5] https://review.openstack.org/#/c/274471/
>> [6] https://review.openstack.org/#/c/265811/
>> [7] https://review.openstack.org/#/c/265925/
>> [8]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088027.html
>> [9] https://review.openstack.org/#/c/279432/
>>
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>>
>>
>> ​We could just parse out the tox slowest tests output we already have.  Do
> something like pylint where we look at existing/current slowest test and
> balk if that's exceeded.
> 
> Thoughts?
> 
> John
> 

I'm not really sure that writing a "hacking" check for this is a
worthwhile investment.  (It's not a hacking check really, but something
more like what you're describing, but that's beside the point.)

We should just be looking for large, complex unit tests in review, and
the ones that we already have should be moving towards the functional
test area anyway.

So what would the objective here be exactly?
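
(For reference, the kind of check John describes could be as small as the
sketch below: feed it the per-test timing report and have it exit non-zero
when any test exceeds an agreed ceiling. The input format of one
"<test_id> <runtime_seconds>" pair per line and the 5-second ceiling are
assumptions for illustration; this is not existing Cinder or tox tooling.)

#!/usr/bin/env python
# Illustrative sketch only: read "<test_id> <runtime_seconds>" lines on
# stdin and fail if any test is slower than MAX_SECONDS.
import re
import sys

MAX_SECONDS = 5.0  # arbitrary ceiling for a "unit" test


def main():
    offenders = []
    for line in sys.stdin:
        match = re.match(r'^(\S+)\s+(\d+\.\d+)\s*$', line)
        if match and float(match.group(2)) > MAX_SECONDS:
            offenders.append((float(match.group(2)), match.group(1)))
    for secs, test in sorted(offenders, reverse=True):
        print('%.2fs  %s' % (secs, test))
    return 1 if offenders else 0


if __name__ == '__main__':
    sys.exit(main())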

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][aodh] "aodh alarm list" vs "aodh alarm search"

2016-03-07 Thread gordon chung


On 07/03/2016 8:05 AM, Julien Danjou wrote:
> On Mon, Mar 07 2016, gordon chung wrote:
>
>> shall we drop 'alarm search' nomenclature and use just 'alarm list' for
>> both queries (standard and complex). the concern i have right now is the
>> proposal is to add standard query support to 'alarm list' while complex
>> query support is in 'alarm search'. this is very confusing especially
>> because both commands use '--query' as their option input.
>
> So you have to differentiate 2 things: the REST API and the CLI.
> You can't merge both on the REST API side, because the complex query
> system require to send a JSON object, so that requires a POST – whereas
> listing is simply done via GET. That part we can't merge together.
>
> Now, if your plan is to merge both command on the CLI side, I see no
> objection. That's probably a good UX point.
>

yeah, the proposal is to merge on the client side. let's do that, since 
it was the same way we were leaning at the meeting [1].

1. drop aodh alarm search
2. add both --query and --complex-query support to aodh alarm list

[1] 
http://eavesdrop.openstack.org/meetings/telemetry/2016/telemetry.2016-03-03-15.01.log.html
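
(Purely as an illustration of the client-side merge, a rough sketch of the
dispatch the merged command boils down to: simple filters go out as GET
parameters, while a complex query document gets POSTed as JSON, matching the
REST split described above. This is not actual aodhclient code; the endpoint
paths and parameter names are assumptions.)

# Illustrative only, not aodhclient code.
import json
import requests


def alarm_list(endpoint, token, query=None, complex_query=None):
    headers = {'X-Auth-Token': token}
    if complex_query is not None:
        # a complex query is a JSON document, so it has to be POSTed
        resp = requests.post(endpoint + '/v2/query/alarms',
                             headers=headers,
                             json={'filter': json.dumps(complex_query)})
    else:
        # simple filters fit into repeated GET parameters
        params = {
            'q.field': [f for f, _, _ in (query or [])],
            'q.op': [op for _, op, _ in (query or [])],
            'q.value': [v for _, _, v in (query or [])],
        }
        resp = requests.get(endpoint + '/v2/alarms',
                            headers=headers, params=params)
    resp.raise_for_status()
    return resp.json()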

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-07 Thread Jeff Peeler
+1

On Mon, Mar 7, 2016 at 3:57 AM, Michal Rostecki  wrote:
> +1
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugins] Should we maintain example plugins?

2016-03-07 Thread Simon Pasquier
Yet another example [1] of why a dummy/example plugin should be integrated
in the Fuel CI process: the current version of Fuel has been broken for (almost)
all plugins for at least a week and no one noticed.
Regards,
Simon

[1] https://bugs.launchpad.net/fuel/+bug/1554095

On Mon, Mar 7, 2016 at 3:16 PM, Simon Pasquier 
wrote:

> What about maintaining a dummy plugin (eg running only one or two very
> simple tasks) as a standalone project for the purpose of QA?
> IMO it would make more sense than having those example plugins in the
> fuel-plugins project...
> Regards,
> Simon
>
> On Mon, Mar 7, 2016 at 2:49 PM, Igor Kalnitsky 
> wrote:
>
>> > and really lowering barriers for people who just begin create plugins.
>>
>> Nonsense. First, people usually create them via running `fpb --create
>> plugin-name` that generates plugin boilerplate. And that boilerplate
>> won't contain that changes.
>>
>> Second, if people ain't smart enough to change few lines in
>> `metadata.yaml` of generated boilerplate to make it work with latest
>> Fuel, maybe it's better for them to do not develop plugins at all?
>>
>> On Fri, Mar 4, 2016 at 2:24 PM, Stanislaw Bogatkin
>>  wrote:
>> > +1 to maintain example plugins. It is easy enough and really lowering
>> > barriers for people who just begin create plugins.
>> >
>> > On Fri, Mar 4, 2016 at 2:08 PM, Matthew Mosesohn <
>> mmoses...@mirantis.com>
>> > wrote:
>> >>
>> >> Igor,
>> >>
>> >> It seems you are proposing an IKEA approach to plugins. Take Fuel's
>> >> example plugin, add in the current Fuel release, and then build it. We
>> >> maintained these plugins in the past, but now it should be a manual step
>> >> to test it out on the current release.
>> >>
>> >> What would be a more ideal situation that meets the needs of users and
>> >> QA? Right now we have failed tests until we can decide on a solution
>> >> that works for everybody.
>> >>
>> >> On Fri, Mar 4, 2016 at 1:26 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> >> wrote:
>> >> > No, this is a wrong road to go.
>> >> >
>> >> > What if in Fuel 10 we drop v1 plugins support? What should we do?
>> >> > Remove v1 example from source tree? That doesn't seem good to me.
>> >> >
>> >> > Example plugins are only examples. The list of supported releases
>> must
>> >> > be maintained on system test side, and system tests must inject that
>> >> > information into plugin's metadata.yaml and test it.
>> >> >
>> >> > Again, I don't say we shouldn't test plugins. I say, tests should be
>> >> > responsible for preparing plugins. I can say even more: tests should
>> >> > not rely on what is produced by plugins, since it's something that
>> >> > could be changed and tests start failing.
>> >> >
>> >> > On Thu, Mar 3, 2016 at 7:54 PM, Swann Croiset > >
>> >> > wrote:
>> >> >> IMHO it is important to keep plugin examples and keep testing them,
>> >> >> very
>> >> >> valuable for plugin developers.
>> >> >>
>> >> >> For example, I've encountered [0] the case where "plugin as role"
>> >> >> feature
>> >> >> wasn't easily testable with fuel-qa because not compliant with the
>> last
>> >> >> plugin data structure,
>> >> >> and more recently we've spotted a regression [1] with
>> "vip-reservation"
>> >> >> feature introduced by a change in nailgun.
>> >> >> These kind of issues are time consuming for plugin developers and
>> >> >> can/must
>> >> >> be avoided by testing them.
>> >> >>
>> >> >> I don't even understand why the question is raised while fuel
>> plugins
>> >> >> are
>> >> >> supposed to be supported and more and more used [3], even by murano
>> [4]
>> >> >> ...
>> >> >>
>> >> >> [0] https://bugs.launchpad.net/fuel/+bug/1543962
>> >> >> [1] https://bugs.launchpad.net/fuel/+bug/1551320
>> >> >> [3]
>> >> >>
>> >> >>
>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/085636.html
>> >> >> [4] https://review.openstack.org/#/c/286310/
>> >> >>
>> >> >> On Thu, Mar 3, 2016 at 3:19 PM, Matthew Mosesohn
>> >> >> 
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi Fuelers,
>> >> >>>
>> >> >>> I would like to bring your attention a dilemma we have here. It
>> seems
>> >> >>> that there is a dispute as to whether we should maintain the
>> releases
>> >> >>> list for example plugins[0]. In this case, this is for adding
>> version
>> >> >>> 9.0 to the list.
>> >> >>>
>> >> >>> Right now, we run a swarm test that tries to install the example
>> >> >>> plugin and do a deployment, but it's failing only for this reason.
>> I
>> >> >>> should add that this is the only automated daily test that will
>> verify
>> >> >>> that our plugin framework actually works. During the Mitaka
>> >> >>> development  cycle, we already had an extended period where plugins
>> >> >>> were broken[1]. Removing this test (or leaving it permanently red,
>> >> >>> which is effectively the same), would raise the risk to any member
>> of
>> >> >>> the 

Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-07 Thread Emilien Macchi
I've proceeded with the change.
We created puppet-neutron-core and added Sergey Kolekonov as part of this
group.

Congrats Sergey!

On 03/04/2016 10:40 AM, Emilien Macchi wrote:
> Hi,
> 
> To scale up our review process, we created puppet-keystone-core and it
> worked pretty well until now.
> 
> I propose that we continue this model and create puppet-neutron-core.
> 
> I also propose to add Sergey Kolekonov in this group.
> He's done a great job helping us to bring puppet-neutron rock-solid for
> deploying OpenStack networking.
> 
> http://stackalytics.com/?module=puppet-neutron=marks
> http://stackalytics.com/?module=puppet-neutron=commits
> 14 commits and 47 reviews, present on IRC during meetings & bug triage,
> he's always helpful. He has a very good understanding of Neutron &
> Puppet so I'm quite sure he would be a great addition.
> 
> As usual, please vote!
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Derek Higgins
On 6 March 2016 at 16:58, James Slagle <james.sla...@gmail.com> wrote:
> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi <emil...@redhat.com> wrote:
>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>> technical improvements to stop having so much CI failures.
>>
>>
>> 1/ Stop creating swap files. We don't have SSD, this is IMHO a terrible
>> mistake to swap on files because we don't have enough RAM. In my
>> experience, swaping on non-SSD disks is even worst that not having
>> enough RAM. We should stop doing that I think.
>
> We have been relying on swap in tripleo-ci for a little while. While
> not ideal, it has been an effective way to at least be able to test
> what we've been testing given the amount of physical RAM that is
> available.

Ok, so I have a few points here; in places where I'm making
assumptions I'll try to point them out.

o Yes I agree using swap should be avoided if at all possible

o We are currently looking into adding more RAM to our testenv hosts,
at which point we can afford to be a little more liberal with memory
and this problem should become less of an issue, having said that

o Even though using swap is bad, if we have some processes with a
large memory footprint that don't require constant access to a portion of
that footprint, swapping it out over the duration of the CI test isn't as
expensive as it sounds (assuming it doesn't need to be swapped
back in and the kernel has selected good candidates to swap out)

o The test envs that host the undercloud and overcloud nodes have 64G
of RAM each, they each host 4 testenvs, and each testenv running a
HA job can use up to 21G of RAM, so we have overcommitted there. This
is only a problem if a testenv host gets 4 HA jobs that are
started around the same time (and as a result each has 4 overcloud
nodes running at the same time); to allow this to happen without VMs
being killed by the OOM killer we've also enabled swap there. The majority of
the time this swap isn't in use; it only matters if all 4 testenvs are being
used simultaneously and they are all running the second half of a CI
test at the same time.
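(Making the overcommit explicit: 4 testenvs x 21G of peak demand is 84G
against 64G of physical RAM, so roughly 20G has to come from swap if all
four HA jobs hit their peak at the same time.)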

o The overcloud nodes are VMs running with an "unsafe" disk caching
mechanism; this causes sync requests from the guest to be ignored and, as a
result, if the instances being hosted on these nodes are going into
swap, this swap will be cached on the host as long as RAM is available,
i.e. swap being used in the undercloud or overcloud isn't being synced
to the disk on the host unless it has to be.

o What I'd like us to avoid is simply bumping up the memory every time
we hit an OOM error without at least
  1. Explaining why we need more memory all of a sudden
  2. Looking into a way we may be able to avoid simply bumping the RAM
(at peak times we are memory constrained)

As an example, let's take a look at the swap usage on the undercloud of
a recent CI nonha job [1][2]. These instances have 5G of RAM with 2G of
swap enabled via a swapfile.
The overcloud deploy started at 22:07:46 and finished at 22:28:06.

In the graph you'll see a spike in memory being swapped out around
22:09, which corresponds almost exactly to when the overcloud image is
being downloaded from swift [3]. Looking at the top output at the end of
the test you'll see that swift-proxy is using over 500M of memory [4].

I'd much prefer we spend time looking into why the swift proxy is
using this much memory rather than blindly bumping the memory allocated
to the VM; perhaps we have something configured incorrectly or we've
hit a bug in swift.
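
(As a starting point for that kind of investigation, something as small as
the sketch below, run on the undercloud, is usually enough to show which
processes are actually being swapped out. Illustrative only; it just sums
the VmSwap field from /proc on a Linux host.)

#!/usr/bin/env python
# Rough sketch: list the top swap consumers by summing VmSwap from
# /proc/<pid>/status (Linux only).
import glob

usage = []
for status_path in glob.glob('/proc/[0-9]*/status'):
    name, swap_kb = None, 0
    try:
        with open(status_path) as f:
            for line in f:
                if line.startswith('Name:'):
                    name = line.split()[1]
                elif line.startswith('VmSwap:'):
                    swap_kb = int(line.split()[1])
    except IOError:  # the process exited while we were reading
        continue
    if swap_kb:
        usage.append((swap_kb, name))

for swap_kb, name in sorted(usage, reverse=True)[:10]:
    print('%8d kB  %s' % (swap_kb, name))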

Having said all that, we can bump the memory allocated to each node but
we have to accept 1 of 2 possible consequences:
1. We'll end up using the swap on the testenv hosts more than we
currently are, or
2. We'll have to reduce the number of testenvs per host from 4 down
to 3, wiping out 25% of our capacity

[1] - 
http://logs.openstack.org/85/289085/2/check-tripleo/gate-tripleo-ci-f22-nonha/6fda33c/
[2] - http://goodsquishy.com/downloads/20160307/swap.png
[3] - 22:09:03 21678 INFO [-] Master cache miss for image
b6a96213-7955-4c4d-829e-871350939e03, starting download
  22:09:41 21678 DEBUG [-] Running cmd (subprocess): qemu-img info
/var/lib/ironic/master_images/tmpvjAlCU/b6a96213-7955-4c4d-829e-871350939e03.part
[4] - 17690 swift 20   0  804824 547724   1780 S   0.0 10.8
0:04.82 swift-prox+


>
> The recent change to add swap to the overcloud nodes has proved to be
> unstable. But that has more to do with it being racey with the
> validation deployment afaict. There are some patches currently up to
> address those issues.
>
>>
>>
>> 2/ Split CI jobs in scenarios.
>>
>> Currently we have CI jobs for ceph, HA, non-ha, containers and the
>> current situation is that jobs fail randomly, due to performance issues.

We don't know that it's due to performance issues. You're probably correct that
we wouldn't

[openstack-dev] [nova] Migration shared storage negotiation

2016-03-07 Thread Matthew Booth
Timofey has posted a patch which aims to improve shared storage negotiation
a little:

  https://review.openstack.org/#/c/286745/

I've been thinking for some time about going bigger. It occurs to me that
the destination node has enough context to know what should be present. I
think it would be cleaner for the destination to check what it has, and
send the delta back to the source in migration_data. I think that would
remove the need for shared storage negotiation entirely. Of course, we'd
need to be careful about cleaning up after ourselves, but that's already
true.

Thoughts?

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Steven Dake (stdake)
Hi folks,

It was never really discussed if we would back-port upgrades to liberty.  This 
came up during an IRC conversation on Friday [1], and a vote was requested.  The 
facts of the discussion, distilled, are:

  *   We never agreed as a group to do a back-port of upgrades during our 
back-port discussion
  *   An operator that can't upgrade her Z version of Kolla (1.1.1 from 1.1.0) 
is stuck without CVE or OSSA corrections.
  *   Because of a lack of security upgrades, the individual responsible for 
executing the back-port would abandon the work (but not use the abandon 
feature of gerrit for changes already in the queue)

Since we never agreed, in that IRC discussion a vote was requested, and I am 
administering the vote.  The vote request was specifically whether we should have 
a back-port of upgrades in 1.1.0.  Both parties agreed they would live with the 
results.

I would like to point out that not porting upgrades means that the liberty 
branch would essentially become abandoned unless some other brave soul takes up 
a backport.  I would also like to point out that this is yet another exception, 
much like the thin-containers back-port which was accepted.  See how exceptions 
become the way to the dark side.  We really need to stay exception-free going 
forward (in Mitaka and later) as much as possible to prevent expectations that 
we will make exceptions when none should be made.

Please vote +1 (backport) or -1 (don't backport).  An abstain in this case is 
the same as voting -1, so please vote either way.  I will leave the voting open 
for 1 week until April 14th.  If there is a majority in favor, I will close 
voting early.  We currently require 6 votes for a majority as our core team 
consists of 11 people.

Regards,
-steve


[1] 
http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-03-04.log.html#t2016-03-04T18:23:26

Warning [1] was a pretty heated argument and there may have been some swearing 
:)

voting.

"Should we back-port upgrades
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-web] Jenkins failing, tox -e py27 error

2016-03-07 Thread Igor Kalnitsky
Hey Jeremy,

I got it and I'm working on a patch [1] that should solve it. I simply
stopped doing the postgres setup since it seems it's already set up.

[1] https://review.openstack.org/#/c/289278/

On Mon, Mar 7, 2016 at 4:38 PM, Jeremy Stanley  wrote:
> On 2016-03-07 16:02:40 +0530 (+0530), Prameswar Lal wrote:
>> I recently started exploring Fuel. Found a small error in documentation but
>> when i am trying to push the fix, jenkins is throwing following errors:
> [...]
>> | Creating role openstack_citest with password
>> openstack_citest2016-03-07 10:05:18.684
>> 
>> | psql: FATAL:  password authentication failed for user
>> "postgres"2016-03-07 10:05:18.684
>> 
> [...]
>
> Late last week we moved unit test jobs to a new minimal worker type
> which requires installation of distro packages and database
> configuration during job runtime. I tried to recreate (as closely as
> possible) with shell scripts the setup previously performed by
> corresponding Puppet modules which took place during image creation
> for the prior worker type. You can see at
> http://logs.openstack.org/40/289240/1/check/gate-fuel-web-python27/bd6300b/console.html#_2016-03-07_10_04_22_654
> that the job successfully logs into postgres with a root password of
> "insecure_slave" and creates an openstack_citest role with password
> "openstack_citest".
>
> Any help from those more familiar with the database use in Fuel's
> unit tests in narrowing down what's different about this postgres
> setup vs the old one would be most appreciated.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara]FFE Request for resume EDP job

2016-03-07 Thread Trevor McKay
My 2 cents, I agree that it is low risk -- the impl for resume is
analogous/parallel to the impl for suspend. And, it makes little 
sense to me to include suspend without resume.

In my mind, these two operations are halves of the same feature,
and since it is already partially implemented and approved, I think the
FFE should be granted.

Best,

Trev

On Mon, 2016-03-07 at 09:07 -0500, Trevor McKay wrote:
> For some reason the link below is wrong for me, it goes to a different
> review. Here is a good one (I hope!):
> 
> https://review.openstack.org/#/c/285839/
> 
> Trev
> 
> On Mon, 2016-03-07 at 14:28 +0800, lu jander wrote:
> > Hi folks,
> > 
> > I would like to request a FFE for the feature “Resume EDP job”: 
> > 
> >  
> > 
> > BP:
> > https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs
> > 
> > 
> > Spec has been merged. https://review.openstack.org/#/c/198264/  
> > 
> > 
> > Suspend EDP patch has been merged.
> >  https://review.openstack.org/#/c/201448/ 
> > 
> > 
> > Code Review: https://review.openstack.org/#/c/285839/
> > 
> >  
> > 
> > code is ready for review. 
> > 
> >  
> > 
> > The benefits of this change: after suspending a job, we can resume this
> > job.
> > 
> >  
> > 
> > The Risk: The risk would be low for this patch, since the code of the
> > suspend patch has been reviewed for a long time.
> > 
> >  
> > 
> > Thanks,
> > 
> > luhuichun
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-web] Jenkins failing, tox -e py27 error

2016-03-07 Thread Jeremy Stanley
On 2016-03-07 16:02:40 +0530 (+0530), Prameswar Lal wrote:
> I recently started exploring Fuel. Found a small error in documentation but
> when i am trying to push the fix, jenkins is throwing following errors:
[...]
> | Creating role openstack_citest with password
> openstack_citest2016-03-07 10:05:18.684
> 
> | psql: FATAL:  password authentication failed for user
> "postgres"2016-03-07 10:05:18.684
> 
[...]

Late last week we moved unit test jobs to a new minimal worker type
which requires installation of distro packages and database
configuration during job runtime. I tried to recreate (as closely as
possible) with shell scripts the setup previously performed by
corresponding Puppet modules which took place during image creation
for the prior worker type. You can see at
http://logs.openstack.org/40/289240/1/check/gate-fuel-web-python27/bd6300b/console.html#_2016-03-07_10_04_22_654
that the job successfully logs into postgres with a root password of
"insecure_slave" and creates an openstack_citest role with password
"openstack_citest".

Any help from those more familiar with the database use in Fuel's
unit tests in narrowing down what's different about this postgres
setup vs the old one would be most appreciated.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #73

2016-03-07 Thread Emilien Macchi
Hi,

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting4.

https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack

As usual, feel free to bring topics in this etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160308

We'll also have open discussion for bugs & reviews, so anyone is welcome
to join.

See you there,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Security hardening backport to Liberty desirable?

2016-03-07 Thread Major Hayden
On 03/05/2016 06:40 AM, Jesse Pretorius wrote:
> Liberty is a stable branch and the Mitaka release is just around the corner. 
> I think it's a bit late in the game to add it. Consider, also, that deployers 
> can easily consume the role with their own playbook to execute it if they 
> would like to.
> 
> *If* a backport is supported by the consuming community and core team, I 
> would only support an opt-in model to allow deployers to make use of the 
> role, but only if they choose to.

That seems reasonable.  Would it be appropriate to add some documentation in 
the Liberty release that explains how to enable the role with that release?

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-07 Thread Dan Prince
While I agree with some (not all) of the sentiment below I'm not sure I
want to spend the time debating this rather broad set of topics in this
email thread. I'm not sure we'd actually ever see the end of it if we
did. Nor can the upstream TripleO team control all the forces at play
here.

So rather than this... Would it be reasonable to ask that we take this
a step further and split things out into concrete ideas to improve
these areas? Perhaps each in its own spec or email thread so that we
can reach clear conclusions to each problem... a step at a time.

A couple of things to set the record straight:

On the CI issues: we actually have some really good ideas on the
table to solve some of these CI problems, including architectural
changes like "split stack" ideas which could allow parts of our
overcloud CI to run on normal cloud instances, auto-promoting package
repositories based on nightly periodic jobs, caching our image builds,
etc. Some of these things will open the door to new features like the
ability to run more test suites (which we haven't done yet due to the
long wall time associated with our CI at this point).

There are reasons for TripleO CI, why it exists, why we have put so
much effort into keeping it running over the years. Yes our tests take
a long time to run, and yes we have some things we still do manually,
but we do catch a lot of issues and breakages in both our own and other
OpenStack projects. And while our core team often disagrees on things I
think we do agree that continuing to expand upstream CI coverage on
major features is key to digging out of the hole we are in. 

As for the rest of it I think a lot of it has to do with doing the best
we can with limited upstream resources. To me the real problem driving
a majority of the issues you describe below is simply trying to land X
number of features upstream by a given date with little to no CI
coverage. The sooner we take the time and discipline to stop this the
better.

Dan


On Fri, 2016-03-04 at 09:23 -0500, Emilien Macchi wrote:
> That's not the name of any Summit's talk, it's just an e-mail I
> wanted
> to write for a long time.
> 
> It is an attempt to expose facts or things I've heard a lot; and
> bring
> constructive thoughts about why it's challenging to contribute in
> TripleO project.
> 
> 
> 1/ "I don't review this patch, we don't have CI coverage."
> 
> One thing I've noticed in TripleO is that a very few people are
> involved
> in CI work.
> In my opinion, CI system is more critical than any feature in a
> product.
> Developing Software without tests is a bit like http://goo.gl/OlgFRc
> All people - specially core - in the project should be involved in CI
> work. If you are TripleO core and you don't contribute on CI, you
> might
> ask yourself why.
> 
> 
> 2/ "I don't review this patch, CI is broken."
> 
> Another thing I've noticed in TripleO is that when CI is broken,
> again,
> a very few people are actually working on fixing failures.
> My experience over the last years taught me to stop my daily work
> when
> CI is broken and fix it asap.
> 
> 
> 3/ "I don't review it, because this feature / code is not my area".
> 
> My first thought is "Aren't we supposed to be engineers and learn new
> areas?"
> My second thought is that I think we have a problem with TripleO Heat
> Templates.
> THT or TripleO Heat Templates's code is 80% of Puppet / Hiera. If
> a TripleO core says "I'm not familiar with Puppet", we have a problem here,
> don't we?
> Maybe we should split this repository? Or revisit the list of people
> Maybe should we split this repository? Or revisit the list of people
> who
> can +2 patches on THT.
> 
> 
> 4/ Patches are stalled. Most of the time.
> 
> Over the last 12 months, I've pushed a lot of patches in TripleO and
> one
> thing I've noticed is that if I don't ping people, my patch got no
> review. And I have to rebase it, every week, because the interface
> changed. I got +2, cool ! Oh, merge conflict. Rebasing. Waiting for
> +2
> again... and so on..
> 
> I personally spent 20% of my time to review code, every day.
> I wrote a blog post about how I'm doing review, with Gertty:
> http://my1.fr/blog/reviewing-puppet-openstack-patches/
> I suggest TripleO folks to spend more time on reviews, for some
> reasons:
> 
> * decreasing frustration from contributors
> * accelerate development process
> * teach new contributors to work on TripleO, and eventually scale-up
> the
> core team. It's a time investment, but worth it.
> 
> In Puppet team, we have weekly triage sessions and it's pretty
> helpful.
> 
> 
> 5/ Most of the tests are run... manually.
> 
> How many times I've heard "I've tested this patch locally, and it
> does
> not work so -1".
> 
> The only test we do in current CI is a ping to an instance.
> Seriously?
> Most of OpenStack CIs (Fuel included), run Tempest, for testing APIs
> and
> real scenarios. And we run a ping.
> That's similar to 1/ but I wanted to raise it too.
> 
> 
> 
> If we don't change our way to work on TripleO, people will be more
> frustrated 

[openstack-dev] [all][api] New API Guidelines Ready for Cross Project Review

2016-03-07 Thread Chris Dent


There are a couple of API-WG proposed guidelines ready for review by
interested parties:

  Header non proliferation:
  https://review.openstack.org/#/c/280381/

  Client interaction guideline for microversions:
  https://review.openstack.org/#/c/243414/

Please have a look and leave your comments. If people agree the
guidelines are good we'll move on to step 4[1]. If not we'll refine
them until they are.

(There are a few other microversion-related guidelines in progress
as well, but the debate on them continues. Feel free to look at them
too:

* https://review.openstack.org/#/c/243041/
* https://review.openstack.org/#/c/243429/

)

Thanks.

[1] http://specs.openstack.org/openstack/api-wg/process.html#review-process

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [salt] Team meeting this Tuesday

2016-03-07 Thread Clark, Jay
Hi Saltstackers,

A kind reminder for this week's #openstack-salt meeting. More on the agenda [1].

[1]. https://wiki.openstack.org/wiki/Meetings/openstack-salt

Regards,
Jay Clark
Sr. OpenStack Deployment Engineer
E: jason.t.cl...@hpe.com
H: 919.341.4670
M: 919.345.1127
IRC (freenode): jasondotstar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugins] Should we maintain example plugins?

2016-03-07 Thread Simon Pasquier
What about maintaining a dummy plugin (eg running only one or two very
simple tasks) as a standalone project for the purpose of QA?
IMO it would make more sense than having those example plugins in the
fuel-plugins project...
Regards,
Simon

On Mon, Mar 7, 2016 at 2:49 PM, Igor Kalnitsky 
wrote:

> > and really lowering barriers for people who just begin create plugins.
>
> Nonsense. First, people usually create them via running `fpb --create
> plugin-name` that generates plugin boilerplate. And that boilerplate
> won't contain that changes.
>
> Second, if people ain't smart enough to change few lines in
> `metadata.yaml` of generated boilerplate to make it work with latest
> Fuel, maybe it's better for them to do not develop plugins at all?
>
> On Fri, Mar 4, 2016 at 2:24 PM, Stanislaw Bogatkin
>  wrote:
> > +1 to maintain example plugins. It is easy enough and really lowering
> > barriers for people who just begin create plugins.
> >
> > On Fri, Mar 4, 2016 at 2:08 PM, Matthew Mosesohn  >
> > wrote:
> >>
> >> Igor,
> >>
> >> It seems you are proposing an IKEA approach to plugins. Take Fuel's
> >> example plugin, add in the current Fuel release, and then build it. We
> >> maintained these plugins in the past, but now it should be a manual step
> >> to test it out on the current release.
> >>
> >> What would be a more ideal situation that meets the needs of users and
> >> QA? Right now we have failed tests until we can decide on a solution
> >> that works for everybody.
> >>
> >> On Fri, Mar 4, 2016 at 1:26 PM, Igor Kalnitsky  >
> >> wrote:
> >> > No, this is a wrong road to go.
> >> >
> >> > What if in Fuel 10 we drop v1 plugins support? What should we do?
> >> > Remove v1 example from source tree? That doesn't seem good to me.
> >> >
> >> > Example plugins are only examples. The list of supported releases must
> >> > be maintained on system test side, and system tests must inject that
> >> > information into plugin's metadata.yaml and test it.
> >> >
> >> > Again, I don't say we shouldn't test plugins. I say, tests should be
> >> > responsible for preparing plugins. I can say even more: tests should
> >> > not rely on what is produced by plugins, since it's something that
> >> > could be changed and tests start failing.
> >> >
> >> > On Thu, Mar 3, 2016 at 7:54 PM, Swann Croiset 
> >> > wrote:
> >> >> IMHO it is important to keep plugin examples and keep testing them,
> >> >> very
> >> >> valuable for plugin developers.
> >> >>
> >> >> For example, I've encountered [0] the case where "plugin as role"
> >> >> feature
> >> >> wasn't easily testable with fuel-qa because not compliant with the
> last
> >> >> plugin data structure,
> >> >> and more recently we've spotted a regression [1] with
> "vip-reservation"
> >> >> feature introduced by a change in nailgun.
> >> >> These kind of issues are time consuming for plugin developers and
> >> >> can/must
> >> >> be avoided by testing them.
> >> >>
> >> >> I don't even understand why the question is raised while fuel plugins
> >> >> are
> >> >> supposed to be supported and more and more used [3], even by murano
> [4]
> >> >> ...
> >> >>
> >> >> [0] https://bugs.launchpad.net/fuel/+bug/1543962
> >> >> [1] https://bugs.launchpad.net/fuel/+bug/1551320
> >> >> [3]
> >> >>
> >> >>
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/085636.html
> >> >> [4] https://review.openstack.org/#/c/286310/
> >> >>
> >> >> On Thu, Mar 3, 2016 at 3:19 PM, Matthew Mosesohn
> >> >> 
> >> >> wrote:
> >> >>>
> >> >>> Hi Fuelers,
> >> >>>
> >> >>> I would like to bring your attention a dilemma we have here. It
> seems
> >> >>> that there is a dispute as to whether we should maintain the
> releases
> >> >>> list for example plugins[0]. In this case, this is for adding
> version
> >> >>> 9.0 to the list.
> >> >>>
> >> >>> Right now, we run a swarm test that tries to install the example
> >> >>> plugin and do a deployment, but it's failing only for this reason. I
> >> >>> should add that this is the only automated daily test that will
> verify
> >> >>> that our plugin framework actually works. During the Mitaka
> >> >>> development  cycle, we already had an extended period where plugins
> >> >>> were broken[1]. Removing this test (or leaving it permanently red,
> >> >>> which is effectively the same), would raise the risk to any member
> of
> >> >>> the Fuel community who depends on plugins actually working.
> >> >>>
> >> >>> The other impact of abandoning maintenance of example plugins is
> that
> >> >>> it means that a given interested Fuel Plugin developer would not be
> >> >>> able to easily get started with plugin development. It might not be
> >> >>> inherently obvious to add the current Fuel release to the
> >> >>> metadata.yaml file and it would likely discourage such a user. In
> this
> >> >>> case, I would propose that we 

Re: [openstack-dev] [Neutron][tempest] Timestamp service extension breaks CI

2016-03-07 Thread Armando M.
On 7 March 2016 at 01:02, Gary Kotton  wrote:

> I do not think that this is a bug in the plugin. Why are we not doing the
> changes in the base class (unless that is not possible). Having an extra
> read when a resource is created seems like a bit of an overkill. I
> understand that it is what is done at the moment.
> I think that at the summit we should try and discuss how we can manage
> extensions better. Maybe the time has even come for us to consider the V3
> neutron API and to make all of the ‘default core services’ as part of the
> official API. So we will not have to do certain hacks to get the plugins to
> work.
>

During the past summit (and the ones before) we talked about how to go
towards a new micro-versioned base model for the API (we even have a
proposal in [1]), but that's a topic that keeps failing to gain traction. I
would love us to make progress on this. Are you willing to sponsor it by
any chance?

[1] https://review.openstack.org/#/c/136760/


>
>

> From: Kevin Benton 
> Reply-To: OpenStack List 
> Date: Sunday, March 6, 2016 at 11:27 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron][tempest] Timestamp service
> extension breaks CI
>
> Keep in mind that the fix for ML2 is the correct behavior, not a workaround.
> It was not including extension data in create calls so there was an API
> difference between a create and a get/update of the same object. It's now
> calling the extensions to let them populate their fields of the dict.
>
> If your plugin does not exhibit the correct behavior in this case, I
> would just disable the test in question because it sounds like a bug in the
> plugin, not the test. It's reasonable to expect the timestamps that will be
> visible on every other API call to also be visible in create calls.
> Hi,
> Gal Sagie pointed me to patches in ML2 and OVN that address this by
> re-reading the networks and ports to ensure that the information is read.
> For those interested and whom it affects please see:
> ML2 - https://review.openstack.org/#/c/276219/
> *OVN - https://review.openstack.org/#/c/277844/
> *
>
> Thanks
> Gary
>
> From: Gary Kotton 
> Reply-To: OpenStack List 
> Date: Sunday, March 6, 2016 at 4:04 PM
> To: OpenStack List 
> Subject: [openstack-dev] [Neutron][tempest] Timestamp service extension
> breaks CI
>
> Hi,
> The commit
> https://review.openstack.org/#q,4c2c983618ddb7a528c9005b0d7aaf5322bd198d,n,z 
> causes
> the CI to fail. This is due to the fact that the port creation does not
> return the created_at and updated_at keys. The tempest test checks that the keys
> are the same. Please see [I]
> I posted patch https://review.openstack.org/289017 to address this. I am
> not sure if this is the correct way to go.
> There are far too many API changes that should not be breaking things at
> this very stage in the cycle.
> Thanks
> Gary
>
> [I]
>
> ft29.11: 
> tempest.api.network.test_ports.PortsTestJSON.test_show_port[id-c9a685bd-e83f-499c-939f-9f7863ca259f,smoke]_StringException:
>  Empty attachments:
>   stderr
>   stdout
>
> pythonlogging:'': {{{
> 2016-03-06 01:05:00,301 27371 INFO [tempest.lib.common.rest_client] 
> Request (PortsTestJSON:test_show_port): 200 GET 
> http://192.168.254.234:9696/v2.0/ports/f00d5dcc-4143-4f63-8c7c-0ea8d566c87b 
> 
>  0.245s
> 2016-03-06 01:05:00,302 27371 DEBUG[tempest.lib.common.rest_client] 
> Request - Headers: {'Content-Type': 'application/json', 'Accept': 
> 'application/json', 'X-Auth-Token': ''}
> Body: None
> Response - Headers: {'status': '200', 'content-length': '532', 
> 'content-location': 
> 'http://192.168.254.234:9696/v2.0/ports/f00d5dcc-4143-4f63-8c7c-0ea8d566c87b 
> ',
>  'connection': 'close', 'date': 'Sun, 06 Mar 2016 09:05:00 GMT', 
> 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 
> 'req-2825ed72-1417-4cf9-b37f-4894fa5b0b0f'}
> Body: {"port": {"status": "ACTIVE", "description": "", 
> "allowed_address_pairs": [], "admin_state_up": true, "network_id": 
> "4545014d-1e11-4b73-a5e3-bcdcd992478e", "name": "", "created_at": 
> "2016-03-06T09:01:56", "mac_address": 

Re: [openstack-dev] Reg: Configuration Management tool for Openstack.

2016-03-07 Thread Jeremy Stanley
On 2016-03-07 12:46:45 +0530 (+0530), cool dharma06 wrote:
[...]
> 2. And also i checked the Github repository for openstack-puppet, i think
> its not active.
[...]

The "puppet-openstack" composition layer module was deprecated in
Juno, but their other modules are active. They have a page here with
some basic information and a list of them:

https://wiki.openstack.org/wiki/Puppet

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara]FFE Request for resume EDP job

2016-03-07 Thread Trevor McKay
For some reason the link below is wrong for me, it goes to a different
review. Here is a good one (I hope!):

https://review.openstack.org/#/c/285839/

Trev

On Mon, 2016-03-07 at 14:28 +0800, lu jander wrote:
> Hi folks,
> 
> I would like to request a FFE for the feature “Resume EDP job”: 
> 
>  
> 
> BP:
> https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs
> 
> 
> Spec has been merged. https://review.openstack.org/#/c/198264/  
> 
> 
> Suspend EDP patch has been merged.
>  https://review.openstack.org/#/c/201448/ 
> 
> 
> Code Review: https://review.openstack.org/#/c/285839/
> 
>  
> 
> code is ready for review. 
> 
>  
> 
> The Benefits for this change: after suspend job, we can resume this
> job.
> 
>  
> 
> The Risk: The risk would be low for this patch, since the code of
> suspend patch has been long time reviewed.
> 
>  
> 
> Thanks,
> 
> luhuichun
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugins] Should we maintain example plugins?

2016-03-07 Thread Igor Kalnitsky
> and really lowering barriers for people who just begin create plugins.

Nonsense. First, people usually create them by running `fpb --create
plugin-name`, which generates plugin boilerplate. And that boilerplate
won't contain those changes.

Second, if people ain't smart enough to change a few lines in
`metadata.yaml` of the generated boilerplate to make it work with the latest
Fuel, maybe it's better for them not to develop plugins at all?

On Fri, Mar 4, 2016 at 2:24 PM, Stanislaw Bogatkin
 wrote:
> +1 to maintain example plugins. It is easy enough and really lowering
> barriers for people who just begin create plugins.
>
> On Fri, Mar 4, 2016 at 2:08 PM, Matthew Mosesohn 
> wrote:
>>
>> Igor,
>>
>> It seems you are proposing an IKEA approach to plugins. Take Fuel's
>> example plugin, add in the current Fuel release, and then build it. We
>> maintained these plugins in the past, but now it should be a manual step
>> to test it out on the current release.
>>
>> What would be a more ideal situation that meets the needs of users and
>> QA? Right now we have failed tests until we can decide on a solution
>> that works for everybody.
>>
>> On Fri, Mar 4, 2016 at 1:26 PM, Igor Kalnitsky 
>> wrote:
>> > No, this is a wrong road to go.
>> >
>> > What if in Fuel 10 we drop v1 plugins support? What should we do?
>> > Remove v1 example from source tree? That doesn't seem good to me.
>> >
>> > Example plugins are only examples. The list of supported releases must
>> > be maintained on system test side, and system tests must inject that
>> > information into plugin's metadata.yaml and test it.
>> >
>> > Again, I don't say we shouldn't test plugins. I say, tests should be
>> > responsible for preparing plugins. I can say even more: tests should
>> > not rely on what is produced by plugins, since it's something that
>> > could be changed and tests start failing.
>> >
>> > On Thu, Mar 3, 2016 at 7:54 PM, Swann Croiset 
>> > wrote:
>> >> IMHO it is important to keep plugin examples and keep testing them,
>> >> very
>> >> valuable for plugin developers.
>> >>
>> >> For example, I've encountered [0] the case where "plugin as role"
>> >> feature
>> >> wasn't easily testable with fuel-qa because not compliant with the last
>> >> plugin data structure,
>> >> and more recently we've spotted a regression [1] with "vip-reservation"
>> >> feature introduced by a change in nailgun.
>> >> These kind of issues are time consuming for plugin developers and
>> >> can/must
>> >> be avoided by testing them.
>> >>
>> >> I don't even understand why the question is raised while fuel plugins
>> >> are
>> >> supposed to be supported and more and more used [3], even by murano [4]
>> >> ...
>> >>
>> >> [0] https://bugs.launchpad.net/fuel/+bug/1543962
>> >> [1] https://bugs.launchpad.net/fuel/+bug/1551320
>> >> [3]
>> >>
>> >> http://lists.openstack.org/pipermail/openstack-dev/2016-February/085636.html
>> >> [4] https://review.openstack.org/#/c/286310/
>> >>
>> >> On Thu, Mar 3, 2016 at 3:19 PM, Matthew Mosesohn
>> >> 
>> >> wrote:
>> >>>
>> >>> Hi Fuelers,
>> >>>
>> >>> I would like to bring your attention a dilemma we have here. It seems
>> >>> that there is a dispute as to whether we should maintain the releases
>> >>> list for example plugins[0]. In this case, this is for adding version
>> >>> 9.0 to the list.
>> >>>
>> >>> Right now, we run a swarm test that tries to install the example
>> >>> plugin and do a deployment, but it's failing only for this reason. I
>> >>> should add that this is the only automated daily test that will verify
>> >>> that our plugin framework actually works. During the Mitaka
>> >>> development  cycle, we already had an extended period where plugins
>> >>> were broken[1]. Removing this test (or leaving it permanently red,
>> >>> which is effectively the same), would raise the risk to any member of
>> >>> the Fuel community who depends on plugins actually working.
>> >>>
>> >>> The other impact of abandoning maintenance of example plugins is that
>> >>> it means that a given interested Fuel Plugin developer would not be
>> >>> able to easily get started with plugin development. It might not be
>> >>> inherently obvious to add the current Fuel release to the
>> >>> metadata.yaml file and it would likely discourage such a user. In this
>> >>> case, I would propose that we remove example plugins from fuel-plugins
>> >>> GIT repo if they are not maintained. Non-functioning code is worse
>> >>> than deleted code in my opinion.
>> >>>
>> >>> Please share your opinions and let's decide which way to go with this
>> >>> bug[2]
>> >>>
>> >>> [0] https://github.com/openstack/fuel-plugins/tree/master/examples
>> >>> [1] https://bugs.launchpad.net/fuel/+bug/1544505
>> >>> [2] https://launchpad.net/bugs/1548340
>> >>>
>> >>> Best Regards,
>> >>> Matthew Mosesohn
>> >>>
>> >>>
>> >>> 

Re: [openstack-dev] [oslo] Common RPC Message Trace Mechanism

2016-03-07 Thread Ken Giusti
Hi,

The 'trace' boolean offered by the AMQP 1.0 driver exposes a debug feature
that is provided by the Proton library.  This is specific to the Proton
library - I'm not sure kombu/zmq/etc offer a similar feature.

As Xuanzhou points out, this debug tool merely prints to stdout a summary
of each AMQP 1.0 protocol frame before it is written/after it is read from
the socket.  It prints the entire protocol exchange (control frames, etc)
and is not limited to just the message traffic.  Given that, I don't think
the transport drivers can implement such a low level debug feature unless
it is offered by the protocol libraries themselves.

-K


On Sun, Mar 6, 2016 at 11:55 PM Xuanzhou Perry Dong 
wrote:

> Hi, Boris,
>
> Thanks for your response.
>
> I refer to the very simple type of "trace": just print out the RPC
> messages to stdout/stderr/syslog. I have checked the osprofiler project and
> think that it is very good and could solve my problem if it is used by the
> Openstack projects to trace their RPC calls.
>
>
> Best Regards,
> Xuanzhou Perry Dong
>
> At 2016-03-07 12:27:12, "Boris Pavlovic"  wrote:
>
> Xuanzhou,
>
> I am not sure what do you mean by "trace". But if you need something that
> allows to do cross service/project tracing then you should take a look at
> osprofiler:
> https://github.com/openstack/osprofiler
>
> Best regards,
> Boris Pavlovic
>
> On Sun, Mar 6, 2016 at 8:15 PM, Xuanzhou Perry Dong 
> wrote:
>
>> Hi,
>>
>> I am looking for a common RPC message trace mechanism in oslo_messaging.
>> This message trace mechanism needs to be common to all drivers. Currently
>> some documentation mentions that oslo_messaging_amqp.trace can activate the
>> message trace (say,
>> http://docs.openstack.org/liberty/config-reference/content/networking-configuring-rpc.html).
>> But it seems that this oslo_messaging_amqp.trace is only available to the
>> Proton driver.
>>
>> Do I miss any existing common RPC message trace mechanism in oslo? If
>> there is no such mechanism, I would propose to create such a mechanism for
>> oslo.
>>
>> Any response is appreciated.
>> Thanks.
>> Best Regards,
>> Xuanzhou Perry Dong
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-07 Thread Corey O'Brien
Hongbin, I think the offer to support different OS options is a perfect
example both of what we want and what we don't want. We definitely want to
allow for someone like yourself to maintain templates for whatever OS they
want and to have that option be easily integrated in to a Magnum
deployment. However, when developing features or bug fixes, we can't wait
for you to have time to add it for whatever OS you are promising to
maintain. Instead, we would all be forced to develop the feature for that
OS as well. If every member of the team had a special OS like that we'd all
have to maintain all of them.

Alternatively, what was agreed on by most at the midcycle was that if
someone like yourself wanted to support a specific OS option, we would have
an easy place for those contributions to go without impacting the rest of
the team. The team as a whole would agree to develop all features for at
least the reference OS. Then individuals or companies who are passionate
about an alternative OS can develop the features for that OS.

Corey

On Sat, Mar 5, 2016 at 12:30 AM Hongbin Lu  wrote:

>
>
>
>
> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* March-04-16 6:31 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
>
>
>
> Steve,
>
>
>
> On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake)  wrote:
>
>
>
> *From: *Adrian Otto 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Friday, March 4, 2016 at 12:48 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
>
>
>
> Hongbin,
>
>
>
> To be clear, this pursuit is not about what OS options cloud operators can
> select. We will be offering a method of choice. It has to do with what we
> plan to build comprehensive testing for,
>
> This is easy. Once we build comprehensive tests for the first OS, just
> re-run it for other OS(s).
>
>
>
> and the implications that has on our pace of feature development. My
> guidance here is that we resist the temptation to create a system with more
> permutations than we can possibly support. The relation between bay node
> OS, Heat Template, Heat Template parameters, COE, and COE dependencies
> (could-init, docker, flannel, etcd, etc.) are multiplicative in nature.
> From the mid cycle, it was clear to me that:
>
>
>
> 1) We want to test at least one OS per COE from end-to-end with
> comprehensive functional tests.
>
> 2) We want to offer clear and precise integration points to allow cloud
> operators to substitute their own OS in place of whatever one is the
> default for the given COE.
>
>
>
> A COE shouldn’t have a default necessarily that locks out other defaults.
> Magnum devs are the experts in how these systems operate, and as such need
> to take on the responsibility of the implementation for multi-os support.
>
>
>
> 3) We want to control the total number of configuration permutations to
> simplify our efforts as a project. We agreed that gate testing all possible
> permutations is intractable.
>
>
>
> I disagree with this point, but I don't have the bandwidth available to
> prove it ;)
>
>
>
> That’s exactly my point. It takes a chunk of human bandwidth to carry that
> responsibility. If we had a system engineer assigned from each of the
> various upstream OS distros working with Magnum, this would not be a big
> deal. Expecting our current contributors to support a variety of OS
> variants is not realistic.
>
> You have my promise to support an additional OS for 1 or 2 popular COEs.
>
>
>
> Change velocity among all the components we rely on has been very high. We
> see some of our best contributors frequently sidetracked in the details of
> the distros releasing versions of code that won’t work with ours. We want
> to upgrade a component to add a new feature, but struggle to because the
> new release of the distro that offers that component is otherwise
> incompatible. Multiply this by more distros, and we expect a real problem.
>
> At Magnum upstream, the overhead doesn’t seem to come from the OS.
> Perhaps, that is specific to your downstream?
>
>
>
> There is no harm if you have 30 gates running the various combinations.
> Infrastructure can handle the load.  Whether devs have the cycles to make a
> fully bulletproof gate is the question I think you answered with the word
> intractable.
>
>
>
> Actually, our existing gate tests are really stressing out our CI infra.
> At least one of the new infrastructure providers that replaced HP has
> equipment that runs considerably slower. For example, our swarm functional
> gate now frequently fails because it can’t finish before the allowed time
> limit of 

Re: [openstack-dev] [nova] How to fix the wrong usage of 'format' jsonschema keyword in server create API

2016-03-07 Thread Jay Pipes

On 03/07/2016 08:05 AM, Alex Xu wrote:

2016-03-07 19:23 GMT+08:00 Sean Dague >:

On 03/07/2016 01:18 AM, Alex Xu wrote:
> Hi,
>
> Due to this regression bug https://launchpad.net/bugs/1552888, we found we
> are using the 'format' jsonschema keyword in the wrong way. 'format' is not
> only for string instances, but for all types.

Can you give an example of the kinds of things that are currently being
rejected? And if we think those REST API calls are valid? I'd like to
know what we started blocking in Option 3 that no one noticed until now.


In the legacy v2 API, we create a server with a network like this:
"networks": [{"uuid": "f4001fde-7bb8-4a73-b1a9-03b444d1f6f8", "port":
null}]'
The port can be null.

With the v2.1 API, you will get a 400:

curl -g -i -X POST
http://192.168.2.176:8774/v2.1/b90b53ed87d74e19806da34dbaa056c9/servers
-H "User-Agent: python-novaclient" -H "Content-Type: application/json"
-H "Accept: application/json" -H "X-Auth-Token:
e740965218754560a98d9ac188271253" -d '{"server": {"name": "vm4",
"imageRef": "33a713dc-7efe-488c-bf12-d902ff5e6118", "flavorRef": "1",
"max_count": 1, "min_count": 1, "networks": [{"uuid":
"f4001fde-7bb8-4a73-b1a9-03b444d1f6f8", "port": null}]}}'
HTTP/1.1 400 Bad Request
X-Openstack-Nova-Api-Version: 2.1
Vary: X-OpenStack-Nova-API-Version
Content-Type: application/json; charset=UTF-8
Content-Length: 117
X-Compute-Request-Id: req-c5ab91ca-dc24-42ea-8272-7f35571b15da
Date: Mon, 07 Mar 2016 13:01:58 GMT

{"badRequest": {"message": "Invalid input for field/attribute port.
Value: None. None is not a 'uuid'", "code": 400}}

This is due to we write json-schema like this:
'port': {
    'type': ['string', 'null'],
    'format': 'uuid'
},


We assumed 'type' would allow a 'null' value and that 'format' would only be
checked against string instances. Actually 'null' is passed to the format
check as well, and the format check then fails.


So, 'null' should be removed from there and the required set of 
attributes should have 'uuid' removed. Is that correct?


If so, I think it should be fine to correct it without a microversion.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Cancelling team meeting

2016-03-07 Thread Renat Akhmerov
Hi,

We are cancelling today's team meeting since a number of team members are on
holiday.

Renat
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to fix the wrong usage of 'format' jsonschema keyword in server create API

2016-03-07 Thread Alex Xu
2016-03-07 19:23 GMT+08:00 Sean Dague:

> On 03/07/2016 01:18 AM, Alex Xu wrote:
> > Hi,
> >
> > Due to this regression bug https://launchpad.net/bugs/1552888, we found we
> > are using the 'format' jsonschema keyword in the wrong way. The 'format'
> > keyword is not only applied to string instances, but to values of all types.
>
> Can you give an example of the kinds of things that are currently being
> rejected? And if we think those REST API calls are valid? I'd like to
> know what we started blocking in Option 3 that no one noticed until now.
>
>
In the legacy v2 API, we create a server with a network like this:
"networks": [{"uuid": "f4001fde-7bb8-4a73-b1a9-03b444d1f6f8", "port":
null}]'
The port can be null.

With v2.1 API, you will get 400:

curl -g -i -X POST
http://192.168.2.176:8774/v2.1/b90b53ed87d74e19806da34dbaa056c9/servers -H
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H
"Accept: application/json" -H "X-Auth-Token:
e740965218754560a98d9ac188271253" -d '{"server": {"name": "vm4",
"imageRef": "33a713dc-7efe-488c-bf12-d902ff5e6118", "flavorRef": "1",
"max_count": 1, "min_count": 1, "networks": [{"uuid":
"f4001fde-7bb8-4a73-b1a9-03b444d1f6f8", "port": null}]}}'
HTTP/1.1 400 Bad Request
X-Openstack-Nova-Api-Version: 2.1
Vary: X-OpenStack-Nova-API-Version
Content-Type: application/json; charset=UTF-8
Content-Length: 117
X-Compute-Request-Id: req-c5ab91ca-dc24-42ea-8272-7f35571b15da
Date: Mon, 07 Mar 2016 13:01:58 GMT

{"badRequest": {"message": "Invalid input for field/attribute port. Value:
None. None is not a 'uuid'", "code": 400}}

This is due to we write json-schema like this:
'port': {
    'type': ['string', 'null'],
    'format': 'uuid'
},


We assumed 'type' would allow a 'null' value and that 'format' would only be
checked against string instances. Actually 'null' is passed to the format
check as well, and the format check then fails.
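
For what it's worth, this is easy to reproduce outside of Nova. Below is a
minimal standalone sketch using the python-jsonschema library; the
isinstance() guard is just one possible way out and is shown purely for
illustration (it is not necessarily the fix we will propose for Nova):

import uuid

import jsonschema
from jsonschema import FormatChecker

checker = FormatChecker()


@checker.checks('uuid')
def _validate_uuid_format(instance):
    # The checker is invoked for *every* instance type, so it has to
    # tolerate non-string values such as None explicitly.
    if not isinstance(instance, str):
        return True
    try:
        uuid.UUID(instance)
        return True
    except ValueError:
        return False


schema = {'type': ['string', 'null'], 'format': 'uuid'}

# Both calls pass with the guard above; without it, the None case fails
# with "None is not a 'uuid'", which is exactly the 400 shown earlier.
jsonschema.validate('f4001fde-7bb8-4a73-b1a9-03b444d1f6f8', schema,
                    format_checker=checker)
jsonschema.validate(None, schema, format_checker=checker)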




> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] Election Season, PTL and TC March/April 2016

2016-03-07 Thread Tristan Cacqueray
PTL Election details:
  https://wiki.openstack.org/wiki/PTL_Elections_March_2016
TC Election details:
  https://wiki.openstack.org/wiki/TC_Elections_April_2016

Please read the stipulations and timelines for candidates and electorate
contained in these wikipages.

There will be an announcement email opening nominations as well as an
announcement email opening the polls.

Please note that the election workflow is now based on Gerrit, through the
new openstack/election repository. All candidacies must be submitted as
a text file to the openstack/election repository. Please check the
instructions in the wiki documentation.

Be aware, in the PTL elections if the program only has one candidate,
that candidate is acclaimed and there will be no poll. There will only
be a poll if there is more than one candidate stepping forward for a
program's PTL position.

There will be further announcements posted to the mailing list as action
is required from the electorate or candidates. This email is for
information purposes only.

If you have any questions which you feel affect others please reply to
this email thread. If you have any questions that you wish to discuss
in private, please email both myself, Tristan Cacqueray (tristanC), at
tdecacqu at redhat dot com, and Tony Breeds (tonyb), at tony at
bakeyournoodle dot com, so that we may address your concerns.

Thank you,
Tristan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][aodh] "aodh alarm list" vs "aodh alarm search"

2016-03-07 Thread Julien Danjou
On Mon, Mar 07 2016, gordon chung wrote:

> shall we drop 'alarm search' nomenclature and use just 'alarm list' for 
> both queries (standard and complex). the concern i have right now is the 
> proposal is to add standard query support to 'alarm list' while complex 
> query support is in 'alarm search'. this is very confusing especially 
> because both commands use '--query' as their option input.

So you have to differentiate 2 things: the REST API and the CLI.
You can't merge both on the REST API side, because the complex query
system requires sending a JSON object, so that requires a POST – whereas
listing is simply done via GET. That part we can't merge together.
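
(Purely for illustration — the endpoint, port and parameter names below come
from the Ceilometer-era v2 API that aodh inherited, so treat them as
assumptions and double-check them against the aodh docs — the difference
looks roughly like this:)

import json

import requests

AODH = 'http://aodh.example.com:8042'   # assumed endpoint
HEADERS = {'X-Auth-Token': 'TOKEN'}     # assumed token

# Simple filtering maps naturally onto GET query parameters.
alarms = requests.get(AODH + '/v2/alarms', headers=HEADERS,
                      params={'q.field': 'type',
                              'q.op': 'eq',
                              'q.value': 'threshold'})

# A complex query is a nested JSON object (boolean operators, comparisons),
# so it has to be POSTed as a request body instead.
filter_expr = {'and': [{'=': {'type': 'threshold'}},
                       {'=': {'enabled': True}}]}
search = requests.post(AODH + '/v2/query/alarms', headers=HEADERS,
                       json={'filter': json.dumps(filter_expr),
                             'limit': 10})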

Now, if your plan is to merge both commands on the CLI side, I see no
objection. That's probably a good UX point.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Dan Prince
On Sat, 2016-03-05 at 11:15 -0500, Emilien Macchi wrote:
> I'm kind of hijacking Dan's e-mail but I would like to propose some
> technical improvements to stop having so much CI failures.
> 
> 
> 1/ Stop creating swap files. We don't have SSDs; it is IMHO a
> terrible
> mistake to swap on files because we don't have enough RAM. In my
> experience, swapping on non-SSD disks is even worse than not having
> enough RAM. We should stop doing that I think.
> 
> 
> 2/ Split CI jobs in scenarios.
> 
> Currently we have CI jobs for ceph, HA, non-ha, containers and the
> current situation is that jobs fail randomly, due to performance
> issues.
> 
> Puppet OpenStack CI had the same issue where we had one integration
> job
> and we never stopped adding more services until it all became *very*
> unstable. We solved that issue by splitting the jobs and creating
> scenarios:
> 
> https://github.com/openstack/puppet-openstack-integration#description
> 
> What I propose is to split TripleO jobs in more jobs, but with less
> services.
> 
> The benefit of that:
> 
> * more services coverage
> * jobs will run faster
> * less random issues due to bad performances
> 
> The cost is of course it will consume more resources.
> That's why I suggest 3/.
> 
> We could have:
> 
> * HA job with ceph and a full compute scenario (glance, nova, cinder,
> ceilometer, aodh & gnocchi).
> * Same with IPv6 & SSL.
> * HA job without ceph and full compute scenario too
> * HA job without ceph and basic compute (glance and nova), with extra
> services like Trove, Sahara, etc.
> * ...
> (note: all jobs would have network isolation, which is to me a
> requirement when testing an installer like TripleO).

I'm not sure we have enough resources to entertain this option. I would
like to see us split the jobs up but not in exactly the way you
describe above. I would rather see us put the effort into architecture
changes like "split stack" which cloud allow us to test the
configuration side of our Heat stack on normal Cloud instances. Once we
have this in place I think we would have more potential resources and
could entertain running more jobs to and thus could split things out to
run in parallel if we choose to do so.

> 
> 3/ Drop non-ha job.
> I'm not sure why we have it, and the benefit of testing that
> comparing
> to HA.

There are a couple of reasons we have the nonha job, I think. The first is
that not everyone wants to use HA. We run our own TripleO CI cloud without HA at
this point and I think there is interest in maintaining this as a less
complex installation alternative where HA isn't needed.

The second is the need to support functional testing of TripleO where developers
don't have enough resources for 3 controller nodes. At the very least
we'd need a second single-node HA job (which wouldn't really be doing
HA), but it would allow us to continue supporting the compressed
installation for developer testing, etc.

Dan

> 
> 
> Any comment / feedback is welcome,
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solar] Weekly update

2016-03-07 Thread Jedrzej Nowak
Hi,

Here is the weekly update from the Solar team.

F2S:
- mapped all fuel-library tasks to solar resources
- removed resources from the repo; they are now generated from the installed
fuel-library version
- better UX of f2s usage
- successfully deployed controller+compute+cinder scenario
- working on running bvt and swarm tests
- activities on packaging

Solar itself:
- various UX improvements
- various bugs fixed
- packaging improvements



--
Warm regards,
Jedrzej Nowak

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][aodh] "aodh alarm list" vs "aodh alarm search"

2016-03-07 Thread gordon chung


On 07/03/2016 3:15 AM, Julien Danjou wrote:
> On Fri, Mar 04 2016, liusheng wrote:
>
>> Hi folks,
>> Currently, we support the "aodh alarm list" and "aodh alarm search" commands
>> to query alarms.  They both need a mandatory "--type" parameter, and I want
>> to drop that limitation [1]. If we agree on that, "alarm list" will only be
>> used to list all alarms and won't support any query parameters, so it will
>> be equal to the "alarm search" command without any --query parameter
>> specified.  The "alarm search" command is designed to support complex
>> queries which can perform almost all of the filtering; complex query is
>> already supported in Gnocchi.  IRC meeting discussions: [3].
>>
>> So we don't need two overlapping interfaces and want to drop one, "alarm
>> list" or "alarm search"?
>>
>> i. "alarm search" needs users to post an expression in JSON format to
>> perform a specific query; it is not easy to use and it is unlike customary
>> practice (I mean the most common filtering-query usage of other OpenStack
>> projects), compared to the "alarm list --filter xxx=zzz" usage.
>>
>> ii. We don't have a strong requirement to support *complex* query scenarios
>> for alarms; we only have alarms and alarm history records in aodh.
>>
>> iii. I personally think it is enough to support filtering queries with
>> "--filter", which is easy to implement [2], and we have a plan to support
>> pagination queries in aodh.
>>
>> any thoughts ?
>
> I agree that filtering is probably enough in the Aodh use case.
>
> OTOH, we have support for complex query already merged in, and we can
> share the code with Gnocchi (and probably Ceilometer).
>
> `alarm list' is actually `GET /v2/alarms', which is very REST-y, so I
> don't think we should drop it.
>

shall we drop 'alarm search' nomenclature and use just 'alarm list' for 
both queries (standard and complex). the concern i have right now is the 
proposal is to add standard query support to 'alarm list' while complex 
query support is in 'alarm search'. this is very confusing especially 
because both commands use '--query' as their option input.

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread John Trowbridge


On 03/06/2016 11:58 AM, James Slagle wrote:
> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi  wrote:
>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>> technical improvements to stop having so much CI failures.
>>
>>
>> 1/ Stop creating swap files. We don't have SSDs; it is IMHO a terrible
>> mistake to swap on files because we don't have enough RAM. In my
>> experience, swapping on non-SSD disks is even worse than not having
>> enough RAM. We should stop doing that I think.
> 
> We have been relying on swap in tripleo-ci for a little while. While
> not ideal, it has been an effective way to at least be able to test
> what we've been testing given the amount of physical RAM that is
> available.
> 
> The recent change to add swap to the overcloud nodes has proved to be
> unstable. But that has more to do with it being racey with the
> validation deployment afaict. There are some patches currently up to
> address those issues.
> 
>>
>>
>> 2/ Split CI jobs in scenarios.
>>
>> Currently we have CI jobs for ceph, HA, non-ha, containers and the
>> current situation is that jobs fail randomly, due to performance issues.
>>
>> Puppet OpenStack CI had the same issue where we had one integration job
>> and we never stopped adding more services until it all became *very*
>> unstable. We solved that issue by splitting the jobs and creating scenarios:
>>
>> https://github.com/openstack/puppet-openstack-integration#description
>>
>> What I propose is to split TripleO jobs in more jobs, but with less
>> services.
>>
>> The benefit of that:
>>
>> * more services coverage
>> * jobs will run faster
>> * less random issues due to bad performances
>>
>> The cost is of course it will consume more resources.
>> That's why I suggest 3/.
>>
>> We could have:
>>
>> * HA job with ceph and a full compute scenario (glance, nova, cinder,
>> ceilometer, aodh & gnocchi).
>> * Same with IPv6 & SSL.
>> * HA job without ceph and full compute scenario too
>> * HA job without ceph and basic compute (glance and nova), with extra
>> services like Trove, Sahara, etc.
>> * ...
>> (note: all jobs would have network isolation, which is to me a
>> requirement when testing an installer like TripleO).
> 
> Each of those jobs would at least require as much memory as our
> current HA job. I don't see how this gets us to using less memory. The
> HA job we have now already deploys the minimal number of services that
> is possible given our current architecture. Without the composable
> service roles work, we can't deploy fewer services than we already do.
> 
> 
> 
>>
>> 3/ Drop non-ha job.
>> I'm not sure why we have it, and the benefit of testing that comparing
>> to HA.
> 
> In my opinion, I actually think that we could drop the ceph and non-ha
> job from the check-tripleo queue.
> 
> non-ha doesn't test anything realistic, and it doesn't really provide
> any faster feedback on patches. It seems at most it might run 15-20
> minutes faster than the HA job on average. Sometimes it even runs
> slower than the HA job.
> 
> The ceph job we could move to the experimental queue to run on demand
> on patches that might affect ceph, and it could also be a daily
> periodic job.
> 
> The same could be done for the containers job, an IPv6 job, and an
> upgrades job. Ideally with a way to run an individual job as needed.
> Would we need different experimental queues to do that?
> 
> That would leave only the HA job in the check queue, which we should
> run with SSL and network isolation. We could deploy less testenv's
> since we'd have less jobs running, but give the ones we do deploy more
> RAM. I think this would really alleviate a lot of the transient
> intermittent failures we get in CI currently. It would also likely run
> faster.
> 
> It's probably worth seeking out some exact evidence from the RDO
> centos-ci, because I think they are testing with virtual environments
> that have a lot more RAM than tripleo-ci does. It'd be good to
> understand if they have some of the transient failures that tripleo-ci
> does as well.
> 

The HA job in RDO CI is also more unstable than nonHA, although this is
usually not due to memory contention. Most of the time that I see
the HA job fail spuriously in RDO CI, it is because of the Nova
scheduler race. I would bet that this race is the cause for the
fluctuating amount of time jobs take as well, because the recovery
mechanism for this is just to retry. Those retries can add 15 min. per
retry to the deploy. In RDO CI there is a 60min. timeout for deploy as
well. If we can't deploy to virtual machines in under an hour, to me
that is a bug. (Note, I am speaking of `openstack overcloud deploy` when
I say deploy, though start to finish can take less than an hour with
decent CPUs)

RDO CI uses the following layout:
Undercloud: 12G RAM, 4 CPUs
3x Control Nodes: 4G RAM, 1 CPU
Compute Node: 4G RAM, 1 CPU

Is there any ability in our current CI setup to auto-identify the cause
of a 

[openstack-dev] [Fuel] [QA] Major changes in fuel-qa

2016-03-07 Thread Dmitry Tyzhnenko
Hello,

Recently the system_test framework had changes [0] which are incompatible
with the previous test case design.

Now it has separate packages for:
- actions - contains classes with test actions
- core - contains core functionality and decorators
- test - contains base test and test cases
- helpers - contains some additional tools

Some re-design of system_test packages:
- actions were moved to separate packages (system_test.actions)
- core functionality was moved to core packages (system_test.core)
- added system_test.tests.base.ActionTest as the main base class for tests;
all cases should inherit from it.
- all test classes should inherit one or more classes with actions.

Specification can be found here: [1]

IMPORTANT!
If you use third-party tests with this framework, you should:
   - use the @testcase decorator to mark a class as a test and to set its groups
   - make the test class inherit ActionTest plus additional action classes
from the system_test.actions package if required
- remove the case_factory() function from your test cases because it is not
used anymore (it was replaced with @testcase)
- note that the base_group class attribute moved to the @testcase decorator;
it should now be used like this:
  @testcase(groups=['system_test', 'system_test.delete_after_deploy',
...])
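
To give a rough idea, a migrated case could end up looking like the sketch
below. The exact import paths and the action mixin name are my guesses based
on the package layout described above, so please follow the spec [1] for the
authoritative form:

# Hypothetical migrated case; ActionTest, @testcase and the
# system_test.actions package come from the description above, everything
# else (import paths, DeployAndDelete, BaseActions) is an assumption.
from system_test import testcase
from system_test.tests.base import ActionTest
from system_test.actions import BaseActions


@testcase(groups=['system_test', 'system_test.delete_after_deploy'])
class DeployAndDelete(ActionTest, BaseActions):
    """Deploy a cluster and delete it afterwards.

    Replaces the old case_factory()/base_group pattern: the class is
    marked as a test case and assigned to groups by the decorator.
    """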

P.S. These changes affect only tests in 9.0.

[0] -
https://github.com/openstack/fuel-qa/commit/82b392284a3a621aaa435c78d96dde799dfe2372
[1] -
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/template-based-testcases.rst

-- 
WBR,
Dmitry T.
Fuel QA Engineer
http://www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tenant networks

2016-03-07 Thread Neil Jerram
On 07/03/16 11:24, Neil Jerram wrote:
> That’s not at all what I said.  Please reread my message more carefully.

Well, I see that I could have been clearer in my answer...

> *From:*Gk Gk [mailto:ygk@gmail.com]
> *Sent:* 07 March 2016 11:17
> *To:* OpenStack Development Mailing List (not for usage questions)
> 
> *Subject:* Re: [openstack-dev] Tenant networks
>
> So, there is no such thing as a tenant vlan network?

Yes, there is such a thing as a tenant vlan network.  If a tenant 
creates a network, and the deployer has configured tenant networks to be 
implemented using VLANs, you will have a tenant vlan network.
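
To make that concrete, here is a rough sketch with python-neutronclient. The
credentials, endpoint and provider values are made-up examples, and whether
the second call succeeds depends entirely on your deployment's plugin
configuration and policy:

from neutronclient.v2_0 import client

# Assumed credentials/endpoint -- adjust for your cloud.
neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# A non-admin tenant can only ask for "a network"; how it is realised
# (VLAN, VXLAN, GRE, ...) is decided by the deployer's configuration.
neutron.create_network({'network': {'name': 'tenant-net'}})

# Only an admin may pin the implementation with provider attributes:
admin = client.Client(username='admin', password='secret',
                      tenant_name='admin',
                      auth_url='http://controller:5000/v2.0')
admin.create_network({'network': {
    'name': 'vlan-net',
    'provider:network_type': 'vlan',
    'provider:physical_network': 'physnet1',   # example value
    'provider:segmentation_id': 101,           # example value
}})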

Neil

> On Mon, Mar 7, 2016 at 4:06 PM, Neil Jerram  > wrote:
>
> On 07/03/16 09:42, Gk Gk wrote:
>  > Hi All,
>  >
>  > I am confused about tenant vlan networks. Is there such a thing as
>  > 'tenant vlan networks'? I am aware of provider vlan networks. How
>  > can a non-admin tenant user create a vlan network?
>
> A non-admin tenant can create a network, but cannot specify how that
> network is implemented (for example by being mapped onto a VLAN).  The
> implementation of tenant networks depends on how the deployer has
> configured the system.
>
> Regards,
>  Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to fix the wrong usage of 'format' jsonschema keyword in server create API

2016-03-07 Thread Sean Dague
On 03/07/2016 01:18 AM, Alex Xu wrote:
> Hi,
> 
> Due to this regression bug https://launchpad.net/bugs/1552888, we found we
> are using the 'format' jsonschema keyword in the wrong way. The 'format'
> keyword is not only applied to string instances, but to values of all types.

Can you give an example of the kinds of things that are currently being
rejected? And if we think those REST API calls are valid? I'd like to
know what we started blocking in Option 3 that no one noticed until now.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tenant networks

2016-03-07 Thread Neil Jerram
That’s not at all what I said.  Please reread my message more carefully.

From: Gk Gk [mailto:ygk@gmail.com]
Sent: 07 March 2016 11:17
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] Tenant networks

So, there is no such thing as a tenant vlan network?


On Mon, Mar 7, 2016 at 4:06 PM, Neil Jerram 
> wrote:
On 07/03/16 09:42, Gk Gk wrote:
> Hi All,
>
> I am confused about tenant vlan networks. Is there such a thing as
> 'tenant vlan networks'? I am aware of provider vlan networks. How
> can a non-admin tenant user create a vlan network?
A non-admin tenant can create a network, but cannot specify how that
network is implemented (for example by being mapped onto a VLAN).  The
implementation of tenant networks depends on how the deployer has
configured the system.

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tenant networks

2016-03-07 Thread Gk Gk
So, there is no such thing as a tenant vlan network?


On Mon, Mar 7, 2016 at 4:06 PM, Neil Jerram 
wrote:

> On 07/03/16 09:42, Gk Gk wrote:
> > Hi All,
> >
> > I am confused about tenant vlan networks. Is there such a thing as
> > 'tenant vlan networks'? I am aware of provider vlan networks. How
> > can a non-admin tenant user create a vlan network?
>
> A non-admin tenant can create a network, but cannot specify how that
> network is implemented (for example by being mapped onto a VLAN).  The
> implementation of tenant networks depends on how the deployer has
> configured the system.
>
> Regards,
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][tempest] Timestamp service extension breaks CI

2016-03-07 Thread Kevin Benton
Yeah, it sounds like maybe there is a bug in the extension processing for
port bindings if the VNIC type isn't returned for a get. Does ML2 exhibit
the same behavior?
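
To make the pattern discussed further down in this thread concrete, the
create-then-refetch-and-extend flow in a plugin looks roughly like the sketch
below. This is an illustration only, not a proposed patch; the helper and
constant names are recalled from the base plugin code and may not match
exactly:

# Illustrative only: re-read the DB object after *all* DB work so that the
# registered dict-extend hooks can populate extension fields (timestamps,
# binding:vnic_type, ...) in the create response, matching a later GET.
from neutron.api.v2 import attributes
from neutron.db import db_base_plugin_v2


class ExamplePlugin(db_base_plugin_v2.NeutronDbPluginV2):

    def create_port(self, context, port):
        with context.session.begin(subtransactions=True):
            result = super(ExamplePlugin, self).create_port(context, port)
            # ... plugin-specific DB changes for its extensions go here ...
            port_db = self._get_port(context, result['id'])  # the extra read
            self._apply_dict_extend_functions(attributes.PORTS,
                                              result, port_db)
        return result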
On Mar 7, 2016 02:08, "Salvatore Orlando"  wrote:

>
>
> On 7 March 2016 at 10:54, Gary Kotton  wrote:
>
>> There are a number of issues here:
>>
>>1. The create returns additional values, for example the 
>> binding:vnic_type,
>>whilst the get does not
>>
>> This is probably a consequence of fixing the behaviour mismatch between
> create and get.
>
>
>>
>>2. We have some unit tests that we need to change (I guess), that
>>check function parameters. An example for this is the network passed to a
>>method. With the extra extensions this is now changed. In addition to this
>>the create and the get order of the parameters is different
>>
>> Fixing those unit tests should not be a big deal. We can assert only on
> the keys we want to validate and not on the whole call.
>
>
>
>> Thanks
>> Gary
>>
>> From: Kevin Benton 
>> Reply-To: OpenStack List 
>> Date: Monday, March 7, 2016 at 11:45 AM
>>
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Neutron][tempest] Timestamp service
>> extension breaks CI
>>
>> But that's the whole point of doing the read after the create in the
>> plugin. As long as you read after all db changes and call the dict extend
>> function, it should be the same.
>>
>> As far as order goes, python doesn't guarantee order on dictionary keys.
>> Or did I misinterpret what you meant by order?
>> On Mar 7, 2016 01:41, "Gary Kotton"  wrote:
>>
>>> Another issue that we have with the read at create is that the
>>> dictionary returned is not the same as the one returned when there is a get
>>> for the specific resource. The dictionary is also not in the same order.
>>>
>>> This is currently breaking our unit tests… But that is just another side
>>> issue.
>>>
>>> From: Kevin Benton 
>>> Reply-To: OpenStack List 
>>> Date: Monday, March 7, 2016 at 11:23 AM
>>> To: OpenStack List 
>>> Subject: Re: [openstack-dev] [Neutron][tempest] Timestamp service
>>> extension breaks CI
>>>
>>> Right, it can't be done in the base right now because core plugins make
>>> DB changes after the base plugin has been called. These changes include the
>>> initial  create processing of many of the extensions so we can't call the
>>> extend_dict functions before the data many of the registered hooks are
>>> looking for even exists.
>>>
>>> So unfortunately right now it is the responsibility of the plugin to
>>> extend the result after all of the DB work is done, not just the base
>>> plugin stuff. If a plugin doesn't do it, the responses from that plugin's
>>> create calls will not be correct. It was only recently when we started
>>> adding API tests that check create responses for extensions that this bug
>>> became apparent.
>>>
>>> I agree that the extra read right now sucks and it will be worth fixing
>>> in Newton. Calling the dictionary extension processing outside of the
>>> plugin and placing it somewhere in the core before returning the API
>>> response may be possible, but the difficult part is getting the DB object
>>> to pass to the hooks without an additional read since plugins only return
>>> dicts.
>>> On Mar 7, 2016 01:06, "Gary Kotton"  wrote:
>>>
 I do not think that this is a bug in the plugin. Why are we not doing
 the changes in the base class (unless that is not possible). Having an
 extra read when a resource is created seems like a bit of an overkill.
 I understand that it is what is done at the moment.
 I think that at the summit we should try and discuss how we can manage
 extensions better. Maybe the time has even come for us to consider the V3
 neutron API and to make all of the ‘default core services’ part of the
 official API. So we will not have to do certain hacks to get the plugins to
 work.


 From: Kevin Benton 
 Reply-To: OpenStack List 
 Date: Sunday, March 6, 2016 at 11:27 PM
 To: OpenStack List 
 Subject: Re: [openstack-dev] [Neutron][tempest] Timestamp service
 extension breaks CI

 Keep in mind that the fix for ML2 is the correct behavior, not a
 workaround. It was not including extension data in create calls so there
 was an API difference between a create and a get/update of the same object.
 It's now calling the extensions to let them populate their fields of the
 dict.

 If your plugin does not exhibit the correct behavior in this case, I
 would just disable the test in question because it sounds like a bug in the
 plugin, not the test. 
